
7th International Scientific Conference

on Defensive Technologies

Mileva Marić (1875 - 1948)

PROCEEDINGS
ISBN 978-86-81123-82-9

Belgrade, 6-7 October 2016


MILITARY TECHNICAL INSTITUTE
Belgrade, Serbia

Publisher
The Military Technical Institute
Ratka Resanovića 1, 11030 Belgrade
Publisher's Representative
Col. Assistant Prof. Zoran Rajić, PhD (Eng)
Editor
Miodrag Lisov
Technical Editing
Dragan Knežević
Liljana Kojičin

Printing
300 copies

CIP - Cataloguing in Publication
National Library of Serbia, Belgrade

623.4/.7(082)(0.034.2)
66.017/.018:623(082)(0.034.2)

INTERNATIONAL Scientific Conference on Defensive Technologies (7th ; 2016 ; Beograd)
Proceedings [Elektronski izvor] / 7th International Scientific Conference on Defensive Technologies, OTEH 2016, Belgrade, 06-07 October 2016 ; organized by Military Technical Institute, Belgrade ; [editor Miodrag Lisov]. - Belgrade : The Military Technical Institute, 2016 (Beograd : The Military Technical Institute). - 1 elektronski optički disk (CD-ROM) ; 12 cm

Sistemski zahtevi: Nisu navedeni. - Nasl. sa naslovne strane dokumenta. - Tiraž 300. - Bibliografija uz svaki rad.

ISBN 978-86-81123-82-9
1. The Military Technical Institute (Belgrade)
COBISS.SR-ID

7th INTERNATIONAL SCIENTIFIC CONFERENCE

ON DEFENSIVE TECHNOLOGIES

SUPPORTED BY

Ministry of Defence
www.mod.gov.rs

Organized by
MILITARY TECHNICAL INSTITUTE
1 Ratka Resanovića St., Belgrade 11000, SERBIA

www.vti.mod.gov.rs

ORGANIZING COMMITTEE

Nenad Miloradović, PhD, Assistant Minister for Material Resources, Serbia, President
Major General Bojan Zrnić, PhD, Head of Department for Defence Technologies, Serbia
Major General Dušan Stojanović, Head of Department for Planning and Development, Serbia
Brigadier General Slobodan Joksimović, Head of Department for Strategic Planning, Serbia
Major General Goran Zeković, Head of the Military Academy, Serbia
Major General Mladen Vuruna, PhD, Rector of the University of Defence, Serbia
Vladimir Bumbaširević, PhD, Rector of the University of Belgrade, Serbia
Branko Bugarski, PhD, Assistant Minister of the Ministry of Education, Science and Technological Development, Serbia
Radivoje Mitrović, PhD, Dean of the Faculty of Mechanical Engineering, Belgrade, Serbia
Zoran Jovanović, PhD, Dean of the Faculty of Electrical Engineering, Belgrade, Serbia
Đorđe Janaćković, PhD, Dean of the Faculty of Technology and Metallurgy, Belgrade, Serbia
Ivica Radović, PhD, Dean of the Faculty of Security Studies, Belgrade, Serbia
Mladen Bajagić, PhD, Dean of the Police Academy, Belgrade, Serbia
Col. Zoran Rajić, PhD, Director of the Military Technical Institute, Serbia, Vice President
Col. Slobodan Ilić, PhD, Director of the Technical Test Centre, Serbia
Col. Stevan Radojčić, PhD, Head of the Military Geographical Institute, Serbia
Jugoslav Petković, JUGOIMPORT - SDPR, Belgrade, Serbia
Mladen Petković, Director of "Krušik", Valjevo, Serbia
Zoran Stefanović, Director of "Sloboda", Čačak, Serbia
Radoš Milovanović, Director of "Milan Blagojević", Lučani, Serbia
Dobrosav Andrić, Director of "Prvi Partizan", Užice, Serbia
Stanoje Biočanin, Director of "Prva Iskra-namenska", Barič, Serbia
Milojko Brzaković, Director of "Zastava oružje", Kragujevac, Serbia

SECRETARIAT

Marija Samardžić, PhD, secretary
Mirjana Nikolić, MSc
Miodrag Ivanišević, MSc
Dragan Knežević
Jelena Pavlović
Liljana Kojičin


SCIENTIFIC COMMITTEE
Miodrag Lisov, MSc, Military Technical Institute, Serbia, President
Dragoljub Vujić, PhD, Military Technical Institute, Serbia
Nafiz Alemdaroglu, PhD, Middle East Technical University, Ankara, Turkey
Major General Nikola Gelao, Director of Military Centre of Strategic Studies, Roma, Italy
Col. Zbyšek Korecki, PhD, University of Defence, Brno, Czech Republic
Evgeny Sudov, PhD, R&D Applied Logistic Centre, Moscow, Russia
Stevan Berber, PhD, Auckland University, New Zealand
Constantin Rotaru, PhD, Henri Coandă Air Force Academy, Brasov, Romania
Nenad Dodić, PhD, dSPACE GmbH, Paderborn, Germany
Kamen Iliev, PhD, Bulgarian Academy of Sciences, Centre for National Security and Defence, Sofia, Bulgaria
Col. Stoyan Balabanov, Ministry of Defence, Defence Institute, Sofia, Bulgaria
Grechikhin Leonid Ivanovich, PhD, State College of Aviation, Minsk, Belarus
Slobodan Stupar, PhD, Faculty of Mechanical Engineering, Belgrade, Serbia
Col. Goran Dikić, PhD, University of Defence, Serbia
Col. Boban Đorović, PhD, University of Defence, Serbia
Col. Nenad Dimitrijević, PhD, Military Academy, Serbia
Col. Miodrag Regodić, PhD, Military Academy, Serbia
Lt. Col. Dragan Trifković, PhD, Military Academy, Serbia
Vlado Đurković, PhD, Military Academy, Serbia
Biljana Marković, PhD, Faculty of Mechanical Engineering, Sarajevo, Bosnia and Herzegovina
Branko Livada, PhD, Vlatacom Institute, Belgrade, Serbia
Stevica Graovac, PhD, Faculty of Electrical Engineering, Belgrade, Serbia
Col. Martin Macko, PhD, University of Defence, Brno, Czech Republic
Col. Milenko Andrić, PhD, Military Academy, Serbia
Col. Dejan Ivković, PhD, Military Technical Institute, Serbia
George Dobre, PhD, University Politehnica of Bucharest, Romania
Momčilo Milinović, PhD, Faculty of Mechanical Engineering, Belgrade, Serbia
Dragutin Debeljković, PhD, Faculty of Mechanical Engineering, Belgrade, Serbia
Slobodan Jaramaz, PhD, Faculty of Mechanical Engineering, Belgrade, Serbia
Jovan Isaković, PhD, High Engineering School of Professional Studies, Belgrade, Serbia
Aleksa Zejak, PhD, RT-RK Institute for Computer Based Systems, Novi Sad, Serbia
Strain Posavljak, PhD, Faculty of Mechanical Engineering, Banja Luka, Republika Srpska
Fadil Islamović, PhD, Faculty of Mechanical Engineering, Bihać, Bosnia and Herzegovina
Tomaž Vuherer, PhD, University of Maribor, Slovenia
Silva Dobrić, PhD, Military Medical Academy, Serbia
Elizabeta Ristanović, PhD, Military Medical Academy, Serbia
Zijah Burzić, PhD, Military Technical Institute, Serbia
Col. Ivan Pokrajac, PhD, Military Technical Institute, Serbia
Mirko Kozić, PhD, Military Technical Institute, Serbia
Nikola Gligorijević, PhD, Military Technical Institute, Serbia
Vencislav Grabulov, PhD, IMS Institute, Serbia
Lt. Col. Ljubiša Tomić, PhD, Technical Test Centre, Serbia
Nenko Brkljač, PhD, Technical Test Centre, Serbia

PREFACE
The Military Technical Institute, the first and the largest military scientific-research institution in Serbia, with a 68-year-long tradition, traditionally organizes the OTEH scientific conference devoted to defence technologies. The Conference is sponsored by the Ministry of Defence and takes place every second year.
Its aim is to gather scientists and engineers, researchers and designers, manufacturers and university professors in order to exchange ideas and to develop new relationships.
The Seventh International Scientific Conference OTEH 2016 is scheduled as follows: a lecture devoted to Mileva Marić-Einstein, a plenary session with two introductory lectures, working sessions according to the Conference topics, and an exhibition of current weapons and military equipment developed by the Military Technical Institute.
The papers to be presented at the Conference have been classified into the following thematic fields:
Aerodynamics and Flight Dynamics
Aircraft
Weapon Systems and Combat Vehicles
Ammunition and Energetic Materials
Integrated Sensor Systems and Robotic Systems
Telecommunication and Information Systems
Materials and Technologies and CBRN Protection
Quality, Standardization, Metrology, Maintenance and Exploitation.
The Proceedings contain 134 reviewed papers submitted by authors from 15 different countries; I would also like to stress that 24 papers are from abroad. The papers accepted for publication achieved a very high standard, and I expect stimulating discussions on the many topics that will be presented during the two days of the Conference.
On behalf of the organizer I would like to thank all the authors and participants from abroad, as well as from Serbia, for their contributions and efforts which made this Conference possible and successful.
I would also like to thank the Ministry of Education and Science of the Republic of Serbia for its financial support.
Finally, dear guests and participants of the Conference, I would like to wish you an enjoyable stay in Belgrade, and I look forward to seeing you again at the Eighth Conference.
Belgrade, October 2016

Miodrag Lisov
Chairman of the Scientific Committee


CONTENTS
OCCASIONAL LECTURE
3 MILEVA MARI EINSTEIN HER LIFE, WORK AND FATE, Velimir Abramovi
PLENARY LECTURES
7 IMPLEMENTATION OF INTEGRATED LOGISTIC SUPPORT TECHNOLOGIES:
FROM LOGISTIC SUPPORT ANALYSIS UP TO PERFORMANCE BASED
LOGISTICS, Evgeny V. Sudov
9 HISTORICAL DEVELOPMENT OF MODERN SMALL ARMS TECHNOLOGY:
OAK RIDGE NATIONAL LABORATORY PERSPECTIVE, Slobodan Raji
1. SECTION : AERODYNAMICS AND FLIGHT DYNAMICS
13 EFFECT OF BASE BLEED ON THE DRAG REDUCTION, Habib Belaidouni, Saa
ivkovi, Mirko Kozi, Marija Samardi, Boutemdjet Abdelwahid
19 DIVERGENCE ANALYSIS OF THIN COMPOSITE PLATES IN SUBSONIC AND
TRANSONIC FLOWS, Mirko Dinulovi, Aleksandar Grbovi, Danilo Petrainovi
24 AEROACOUSTIC ANALYSIS OF A JET NOZZLE, Toni Ivanov, Vasko Fotev,
Neboja Petrovi, Zorana Trivkovi, Dragan Komarov
29 NUMERICAL AND EXPERIMENTAL INVESTIGATION OF AERODYNAMIC
CHARACTERISTICS OF SPIN STABILIZED PROJECTILE, Damir D. Jerkovi,
Aleksandar V. Kari, Neboja Hristov, Slobodan S. Ili, Slobodan Savi
35 A HIGH SPEED TRAIN MODEL TESTING IN T-32 WIND TUNNEL BY
INFRARED THERMOGRAPHY AND STANDARD METHODS, Slavica Risti,
Suzana Lini, Goran Ocokolji, Boko Rauo, Vojkan Luanin
41 AERODYNAMICS OF THE HIGH SPEED TRAIN BIO-INSPIRED BY A
KINGFISHER, Suzana Lini, Boko Rauo, Mirko Kozi, Vojkan Luanin, Aleksandar
Bengin
47 OBSERVATIONS ON SOME TRANSONIC WIND TUNNEL TEST RESULTS OF A
STANDARD MODEL WITH A T-TAIL, Dijana Damljanovi, ore Vukovi,
Aleksandar Viti, Jovan Isakovi, Goran Ocokolji
52 NUMERICAL AND EXPERIMENTAL ASSESSMENT OF TRANSONIC
TURBULENT FLOW AROUND ONERA M4 MODEL, Jelena Svorcan, Dijana
Damljanovi, Dragan Komarov, Slobodan Stupar, Neboja Petrovi
58 COMPUTATIONAL ANALYSIS OF HELICOPTER MAIN ROTOR BLADES IN
GROUND EFFECT, Zorana Trivkovi, Jelena Svorcan, Marija Balti, Dragan Komarov,
Vasko Fotev
64 SIMULATION OF ROLL AUTOPILOT OF A MISSILE WITH INTERCEPTORS,
Milan Ignjatovi, Milo Pavi, Slobodan Mandi, Bojan Pavkovi, Nataa Vlahovi
68 DESIGN OF THE MAIN PIVOT ON THE FORCED OSCILLATION APPARATUS
FOR THE WIND TUNNEL MEASUREMENTS, Marija Samardi, Dragan
Marinkovski, Duan uri, Zoran Raji, Abdelwahid Boutemedjet

73 PRELIMINARY AERODYNAMIC COMPUTATION OF LONG ENDURANCE


UAV WING, Abdelwahid Boutemedjet, Marija Samardi, Zoran Raji
2. SECTION : AIRCRAFT
79 DEVELOPMENTS IN HEAD-UP DISPLAY TECHNOLOGY FOR BASIC AND
ADVANCED MILITARY TRAINING AIRCRAFT, Robert Wilsey Fraes
85 FLIGHT PERFORMANCE DETERMINATON OF THE PISTON ENGINE
AIRCRAFT SOVA, COMPUTER PROGRAM SOVAPERF, Nemanja Velimirovi,
Kosta Velimirovi
90 POSSIBLE APPROACHES TO EVALUATION OF TRAINING AIRCRAFTS USED
IN FLIGHT SCREENING, Slavia Vlai, Franc Hudomal, Aleksandar Kneevi
95 INTEGRATION OF TACTICAL - MEDIUM RANGE UAV AND CATAPULT
LAUNCH SYSTEM, Zoran Novakovi, Zoran Vasi, Ivana Ili, Nikola Medar, Dragan
Stevanovi
102 CONTRIBUTION TO THE MAINTENANCE OF Mi-8 HELICOPTER IN THE
SERBIAN AIR FORCE, Zoran Ili, Boko Rauo, Miroslav Jovanovi, Ljubia Tomi,
Stevan Jovii, Radomir Janji, Nenko Brklja
108 ON THE EFFECTIVE SHEAR MODULUS OF COMPOSITE HONEYCOMB
SANDWICH PANELS, Lamine Rebhi, Mirko Dinulovi, Predrag Andri, Marjan Dodi,
Branimir Krsti
114 EFFICIENT COMPUTATION METHOD FOR FATIGUE LIFE ESTIMATION OF
AIRCRAFT STRUCTURAL COMPONENTS, Stevan Maksimovi, Mirjana uri,
Zoran Vasi, Ognjen Ognjanovi
119 ANALYSIS OF AIRCRAFT STRUCTURES CROSS SECTION, Bogdan S.
Bogdanovi, Dario A. Sinobad, Tonko A. Mihovilovi
125 STRESS CALCULATION OF NOSE GEAR SUPPORT WITH ASPECT OF
WELDING OF AEROSPACE STEEL 15CRMOV6, Aleksandar Petrovi, Bogdan S.
Bogdanovi, Aleksandar Stanaev
131 INFLUENCE OF PILOTS AVERAGE BODY MASS INCREASING ON BALANCE
OF LIGHT PISTON TRAINING AIRCRAFT, Zorica Sari, Zoran Vasi, Vojislav
Devi, Boris Glava
139 PROTOTYPE SOVA DEVELOPMENT: AIRCRAFT LYFE CYCLE EXTENSION,
Vanja Stefanovi, Marija Blai, Marina Ostoji, Tonko Mihovilovi, Dragan Ili
145 SOME ASPECTS OF THE DIFFERENT TYPES WIRELESS SENSORS
IMPLEMENTATION WITHIN AIRBORNE FLIGHT TEST CONFIGURATION,
Zoran Filipovi, Vladimir Kvrgi, Dragoljub Vuji
152 UAS - FROM MINI TO TACTICAL, Adi Cohen
3. SECTION : WEAPON SYSTEMS AND COMBAT VEHICLES
157 A PRELIMINARY DESIGN MODEL FOR EXPLOSIVELY FORMED
PROJECTILES, Mohammed Amine Boulahlib, Milo Markovi, Slobodan Jaramaz,
Momilo Milinovi, Mourad Bendjaballah

163 TENDENCIES OF DEVELOPMENT OF AMPHIBIOUS ASSETS IN ARMED


FORCES OF NATO COUNTRIES, Nenad Kovaevi, Nenad Dimitrijevi
168 ON ALGORITHM OF SYNCHRONIZED SWARMING AGAINST AN ACTIVE
THREAT SIMULATOR, Radomir Jankovi, Momilo Milinovi
173 DETERMINING PROJECTILE CONSUMPTION DURING INDIRECT MORTAR
FIRE, Aca Randjelovi, Vlado Djurkovi, Petar Repi
177 PROPELLER AND SHIP MAIN EGINE SELECTION IN CORRELATION WITH
OVERALL EFFICIENCY PROPULSION COEFFICIENT IMPROVEMENT, Jovo
Dautovi, Vojkan Madi, Sonja urkovi
182 OPTIMIZATION OF PLANETARY GEARS AND EFFECTS OF THE THINRIMED GEAR ON FILLET STRESS, Milo Sedak, Tatjana M. Lazovi Kapor, Boidar
Rosi
188 PROJECTION OF QUALITY A COMPLEX TECHNICAL SYSTEM, Ljubia Tani,
Petar Jovanovi, Samed Karovi
194 STRESS ANALYSIS OF INTEGRATED 12.7 MM MACHINE GUN MOUNT,
Aleksandar Kari, Duan Jovanovi, Damir Jerkovi, Neboja Hristov
199 STRATEGY IMPLEMENTATION OF DUAL-SEMI-ACTIVE RADAR HOMING
GUIDANCE WITH COUPLING OF TANDEM GUIDED AND LEADING MISSILE
OF AIR DEFENCE MISSILE SYSTEM ON REAL MANEUVERING TARGET,
Markovi Stojan, Milinovi Momilo, Nenad Sakan
205 EXPERIMENTAL INVESTIGATION OF OILS IN FOUR-STROKE ENGINES,
Sreten Peri, Bogdan Nedi
211 OPTIMIZATION OF THE BOX SECTION OF THE SINGLE-GIRDER BRIDGE
CRANE BY GRG ALGORITHM ACCORDING TO DOMESTIC STANDARDS AND
EUROCODES, Goran Pavlovi, Vladimir Kvrgi, Stefan Mitrovi, Mile Savkovi, Neboja
Zdravkovi
218 MATHEMATICAL MODELING DYNAMIC PERFORMANCE OF ARTILLERY
FIRE SUPPORT IN THE OFFENSIVE OPERATION, Damir Projovi, Zoran
Karavidi, Miroslav Ostoji
223 MODELING AND MULTIBODY SIMULATION OF LAND ROVER DEFENDER
110 RIDE AND HANDLING DYNAMICS, Nabil Khettou, Dragan Trifkovi, Slavko
Mudeka
231 PERSPECTIVES OF USE OF SWITCHED RELUCTANCE MOTORS IN COMBAT
VEHICLES, Radoslav Rusinov
4. SECTION : AMMUNITION AND ENERGETIC MATERIALS
237 PHYSICO-CHEMICAL PROPERTIES AND THERMAL STABILITY OF
MICROCRYSTALLINE NITROCELLULOSE ISOLATED FROM WOOD FIBER,
Mohammed Amin Dali
243 A METHOD OF GUNPOWDER GRAIN SHAPE OPTIMIZATION, Stefan Jovanovi
249 COMPOSITE SOLID PROPELLANTS WITH OCTOGENE, Vesna Rodi, Marica
Bogosavljevi, Aleksandar Milojkovi, Saa Brzi
255 SOLVING TECHNICAL PROBLEMS WHILE WORKING WITH ORDNANCE
USING INNOVATION PRINCIPLES, Obrad abarkapa, Duan Raji, arija Markovi

260 APPLYING OF NANOTECHNOLOGY IN PRODUCTION OF RIFLE


AMMUNITION, Mihailo Erevi, Veljko Petrovi, Branka Lukovi
266 DETERMINATION OF COMPATIBILITY OF DOUBLE BASE PROPELLANT
WITH POLYMER MATERIALS USING DIFFERENT TEST METHODS, Mirjana
Dimi, Bojana Fidanovski, Ljiljana Jelisavac, Slavia Stojiljkovi, Nataa Kariik
272 CHARACTERIZATION OF BEHIND ARMOR DEBRIS AFTER PERFORATION
OF STEEL PLATE BY ARMOR PIERCING PROJECTILE, Predrag Elek, Slobodan
Jaramaz, Dejan Mickovi, Miroslav orevi, Nenad Miloradovi
278 VISUALIZING THE THERMAL EFFECT OF THERMOBARIC EXPLOSIVES,
Uro Aneli, Danica Simi, Dragan Kneevi, Marko Devi
283 RELIABILITY OF SOLID ROCKET PROPELLANT GRAIN UNDER
SIMULTANEOUS ACTION OF MULTIPLE TYPES OF LOADS, Nikola Gligorijevi,
Saa ivkovi, Vesna Rodi, Saa Antonovi, Aleksandar Milojkovi, Bojan Pavkovi,
Zoran Novakovi
290 AN EXAMPLE OF PROPELLANT GRAIN STRUCTURAL ANALYSIS UNDER
THE THERMAL AND ACCELERATION LOADS, Saa Antonovi, Nikola
Gligorijevi, Aleksandar Milojkovi, Sredoje Suboti, Saa ivkovi, Bojan Pavkovi
297 TRANSFER OF GRANULATED PBX PRODUCTION TO THE INDUSTRIAL
SCALE, Slavica Terzi, Stanoje Bioanin, Aleksandar orevi, ivka Krsti, Biljana
Kostadinovi, Zoran Borkovi
304 EXPLOSIVE REACTIVE ARMOR ACTION AGAINST SHAPED CHARGE JET,
Dejan Mickovi, Slobodan Jaramaz, Predrag Elek, Nenad Miloradovi, Dragana Jaramaz,
Duan Mickovi
310 AMMUNITION SURPLUS - THREAT TO POSSESSORS DISPOSAL METHODS:
REVIEW OF DEMILITARIZATION TECHNOLOGIES, Bla Miheli
324 SHOCKWAVE OVERPRESSURE OF PROPELLANT GASES AROUND THE
MORTAR, Miodrag Lisov, Slobodan Jaramaz, Mirko Kozi, Novica Ristovi
5. SECTION : INTEGRATED SENSOR SYSTEMS AND ROBOTIC SYSTEMS
331 ACOUSTIC SOURCE LOCALIZATION USING A DISCRETE PROBABILITY
DENSITY METHOD FOR POSITION DETERMINATION, Ivan Pokrajac, Nadica
Kozi, Predrag Okiljevi, Miodrag Vraar, Brusin Radiana
336 STATISTICAL APPROACH IN DETECTION OF AN ACOUSTIC
BLAST WAVE, Miodrag Vraar, Ivan Pokrajac
340 ADAPTIVE TIME VARYING AUTOPILOT DESIGN, Nataa Vlahovi, Stevica
Graovac, Milo Pavi, Milan Ignjatovi
345 MATHEMATICAL MODEL FOR PARAMETER ANALYSIS OF PASSIVELY QSWITCHED Nd:YAG LASERS, Mirjana Nikoli, eljko Vukobrat
350 HFSW RADAR DESIGN: TACTICAL, TECHNOLOGICAL AND
ENVIRONMENTAL CHALLENGES, Dejan Nikoli, Bojan Doli, Nikola Tosi, Nikola
Leki, Vladimir D. Orli, Branislav M. Todorovi
356 EFFECTIVENESS OF ACTIVE VIBRATION CONTROL OF A FLEXIBLE BEAM
USING A DIFFERENT POSITION OF STRAIN GAGE SENSORS, Miroslav Jovanovi,
Aleksandar Simonovi, Neboja Luki, Nemanja Zori, Slobodan Stupar, Slobodan Ili
362 INFLUENCE OF GEOMETRICAL PARAMETERS ON PERFORMANCE OF

MEMS THERMOPILE BASED FLOW SENSOR, Danijela Randjelovi, Olga Jaki,


Mile M. Smiljani, Predrag Poljak, arko Lazi
367 MONITORING PHYSIOLOGICAL STATUS OF THE SOLDIER DURING
COMBAT MISSION VIA INTEGRATED MEDICAL SENSOR (HEART RATE,
OXYGEN SATURATION) SYSTEM, Oliver Mladenovski, Jugoslav Ackoski, Milan
Goci
371 ALUMINIUM TILES DEFECTS DETECTION BY EMPLOYING PULSED
THERMOGRAPHY METHOD WITH DIFFERENT THERMAL CAMERAS, Ljubia
Tomi, Vesna Damnjanovi, Goran Diki, Boban Bonduli, Bojan Milanovi, Rade Pavlovi
377 CHANNEL SELECTOR FOR OPTIMIZATION OF TEST AND CALLIBRATION
PROCEDURES OF ICTM PRESSURE SENSORS, Predrag Poljak, Milo Vorkapi,
Danijela Randjelovi
381 SECURITY SYSTEM IN MILITARY BASES WITH MATLAB ALGORITHM,
Tamara Gjonedva, Sofija Velinovska, Jugoslav Achkoski, Boban Temelkovski
385 IMAGING DETECTOR TECHNOLOGY: A SHORT INSIGHT IN HISTORY AND
FUTURE POSSIBILITIES, Branko Livada, Dragana Peri
391 IMAGE QUALITY PARAMETERS: A SHORT REVIEW AND APPLICABILITY
ANALYSIS, Jelena Koci, Ilija Popadi, Branko Livada
398 MULTI-SENSOR SYSTEM OPERATORS CONSOLE: TOWARDS STRUCTURAL
AND FUNCTIONAL OPTIMIZATION, Dragana Peri, Saa Vuji, Branko Livada
404 STATIONARY ON-ROAD OBSTACLES AVOIDANCE BASED ON COMPUTER
VISION PRINCIPLES, Mourad Bendjaballah, Stevica Graovac, Mohammed Amine
Boulahlib, Milo Markovi
411 GPS AIDED INS WITH GYRO COMPASSING FUNCTION, Ivana Trajkovski, Nada
Asanovi, Vladimir Vukmirica, Milan Miloevi
417 MODERNIZATION OF THE RADAR P12, Verica Marinkovi Nedelicki, Branislav
Pavi, Boris Mikovi, Mladen Mileusni, Predrag Petrovi, Aleksandar Lebl, Dragan
Borjan, Dejan Ivkovi, Dragan Nikoli
422 DISTRIBUTED TARGET TRACKING IN CAMERA NETWORKS USING AN
ADAPTIVE STRATEGY, Nemanja Ili, Khaled Obaid Al Ali, Milo S. Stankovi, Srdjan
S. Stankovi
428 AUTONOMOUS MOBILE ROBOT PATH PLANNING IN COMPLEX AND
DYNAMIC ENVIRONMENTS, Novak Zagradjanin, Stevica Graovac
434 SENSORLESS BRUSHED DC MOTOR SPEED CONTROL USING NATURAL
TRACKING CONTROL ALGORITHM, Milo Pavi, Milan Ignjatovi, Nataa
Vlahovi, Mirko Miljen
6. SECTION : TELECOMMUNICATION AND INFORMATION SYSTEMS
441 EVALUATION OF SELF-ORGANIZING UAV NETWORKS IN NS-3, Nataa Maksi,
Milan Bjelica
446 CONCEPTUALIZING SIMULATION FOR LAWSONS MODEL OF COMMAND
AND CONTROL PROCESSES, Neboja Nikoli
451 GENERATING EFFECTIVE JAMMING AGAINST GLOBAL NAVIGATION
SYSTEMS, Sergei Kostromitsky, Aliaksandr Dyatko, Petr Shumski, Yury Rybak

457 EFFICIENT POWER FLOW ALGORITHM, MODIFIED ALGORITHM NAHMAN


AND PERI, Branko Stojanovi, Milan Moskovljevi, Tomislav Raji
462 SOLID STATE L-BAND HIGH POWER AMPLIFIER USING GAN HEMT
TECHNOLOGY, Zvonko Radosavljevi, Dejan Ivkovi, Dragan Nikoli
466 PERFORMANCE EVALUATION OF NONLINEAR OPTIMIZATION METHODS
FOR TOA LOCALIZATION TECHNIQUES, Maja Rosi, Mirjana Simi, Predrag
Pejovi
472 GPU-BASED PREPROCESSING FOR SPECTRUM SEGMENTATION IN
DIRECTION FINDING, Marko Mii, Ivan Pokrajac, Nadica Kozi, Predrag Okiljevi
478 TECHNIQUES FOR INTELLIGENCE DATA GATHERING IN MOBILE
COMMUNICATIONS, Saa Stojkovi, Ivan Tot, Fejsov Nikola
481 AN IMPLEMENTATION OF MANET NETWORKS ON COMMAND POST
DURING MILITARY OPERATIONS, Vladimir Risti, Boban Z. Pavlovi, Saa Devetak
486 PRACTICAL IMPLEMENTATION OF DIGITAL DOWN CONVERSION FOR
WIDEBAND DIRECTION FINDER ON FPGA, Vuk Obradovi, Predrag Okiljevi,
Nadica Kozi, Dejan Ivkovi
494 STATISTICS OF RATIO OF TWO WEIBULL RANDOM VARIABLES WITH
DIFFERENT PARAMETERS, Ivica Marjanovi, Dejan Rani, Danijela Aleksi, Dejan
Mili, Mihajlo Stefanovi
500 SOFTWARE AND INFORMATIONAL SYSTEMS IN THE PRODUCTION OF
DTM25 OF THE MILITARY GEOGRAPHICAL INSTITUTE, Aleksandar Pavlovi,
Viktor Markovi, Ana Vuievi, Saa Bakra
7. SECTION : MATERIALS, TECHNOLOGIES AND CBRN PROTECTION
507 THERMAL STABILITY AND MAGNETIC PROPERTIES OF -FE2O3
POLYMORPH, Violeta N. Nikoli, Marin Tadi, Vojislav Spasojevi
513 TECHNOLOGY FOR COMBATING BIOTERRORISM, Elizabeta Ristanovi
517 ESTIMATION OF SAFT AND PC-SAFT EOS PARAMETERS FOR N-HEPTANE
UNDER HIGH PRESSURE CONDITIONS, Jovana Ili, Mirko Stijepovi, Aleksandar
Gruji, Jasna Staji Troi, Gorica Ivani, Mirjana Kijavanin
522 THE APPLICATION OF IR THERMOGRAPHY FOR THE CRACKS DETECTION
IN THE COMPOSITE STRUCTURES USED IN AVIATION, Stevan Jovii, Ivana
Kosti, Zoran Ili, Ljubia Tomi, Aleksandar Kovaevi
525 THERMAL AND CAMOUFLAGE PROPERTIES OF ROSALIA ALPINA
LONGHORN BEETLE WITH STRUCTURAL COLORATION, Ivana Kosti, Danica
Pavlovi, Vladimir Lazovi, Darko Vasiljevi, Dejan Stojanovi, Dragan Kneevi, Ljubia
Tomi, Goran Diki, Dejan Panteli
530 ISOGEOMETRIC ANALYSIS OF FREE VIBRATION OF ELLIPTICAL
LAMINATED COMPOSITE PLATES USING THIRD ORDER SHEAR
DEFORMATION THEORY, Ognjen Pekovi, Slobodan Stupar, Aleksandar Simonovi,
Toni Ivanov
536 ON THE CORRELATION OF MICROHARDNESS WITH THE FILM ADHESION
FOR SOFT FILM ON HARD SUBSTRATE COMPOSITE SYSTEM, Jelena Lamovec,
Vesna Jovi, Ivana Mladenovi, Bogdan Popovi, Milo Vorkapi, Vesna Radojevi
541 A COMPARISON OF DIFFERENT CONVEX CORNER COMPENSATION

STRUCTURES APPLICABLE IN ANISOTROPIC WET CHEMICAL ETCHING OF


{100} ORIENTED SILICON, Vesna Jovi, Jelena Lamovec, Mile Smiljani, arko Lazi,
Bogdan Popovi, Predrag Poljak
547 RADIOACESIUM-137 IN THE ENVIRONMENT AND THE EFFECT OF
RADIATION-HYGIENE CERTIFICATION ON FOOD, Nataa Paji, Tatjana Markovi
550 SEPARATION OF THE CARBON-DIOXIDE FROM THE GAS MIXTURE, Dragutin
Nedeljkovi, Lana Puti, Aleksandar Staji, Aleksandar Gruji, Jasna Staji-Troi
556 ELECTRODEPOSITION OF METAL COATINGS FROM EUTECTIC
TYPE IONIC LIQUID, Mihael Buko, Jelena B. Bajat
561 IMPACT OF THE ALTERED TEXTURE OF THE ACTIVE FILLING OF THE
FILTER ON THE SORPTIVE CHARACTERISTICS WITH THE SPECIAL
REFERENCE TO THE EFFECIENCY OF FILTERING, Marina Ili, eljko Seni,
Vladimir Petrovi, Biljana Mihajlovi, Vukica Grkovi
567 INFLUENCE OF DAMAGED INJECTORS USED IN COMMON RAIL SYSTEMS
ON ECOLOGICAL AND ENEGRGY EFFICIENY, Dejan Jankovi, Mileta Ristivojevi,
Dimitrije Kosti
572 DEFECT DURING PRODUCTION OF STEEL CARTRIDGE CASE, Nada Ili,
Ljubica Radovi
577 FAILURE ANALYSIS OF THE STATOR BLADE, Jelena Marinkovi, Duan Vraari,
Ljubica Radovi, Ivo Blai
581 READOUT BEAM COUPLING STRATEGIES FOR PLASMONIC CHEMICAL OR
BIOLOGICAL SENSORS, Zoran Jaki, Mile M. Smiljani, arko Lazi, Dana
Vasiljevi Radovi, Marko Obradov, Dragan Tanaskovi, Olga Jaki
587 COMPLETE KINETIC PROFILING OF THE THREE NANOMOLAR
ACETYLCHOLINESTERASE INHIBITORS, Maja Vitorovi-Todorovi, Mirjana
Jakii, Sonja Bauk, Branko Drakuli
594 DEPENDANCE OF CBRN INSULATING MATERIALS PROTECTION TIME
UPON BUTYL-RUBBER AND FLAME RETARDANT CONTENT, Vukica Grkovi,
Vladimir Petrovi, eljko Seni, Maja Vitorovi-Todorovi
598 FILTERING HALF MASKS USAGE FOR PROTECTION AGAINST AEROSOL
CONTAMINATION OF BIOLOGICAL AGENTS, Negovan Ivankovi, Duan Raji,
Radovan Karkali, Dejan Indji, Duan Jankovi, eljko Seni, Marina Ili
603 LASERS POSSIBILITIES IN BRASS SURFACE CLEANING, Bojana Radojkovi, Slavica
Risti, Suzana Poli, Bore Jegdi, Aleksandar Krmpot, Branislav Salati, Filip Vueti
609 EFFECT OF IF-WS2 NANOPARTICLES ADDITION ON PHYSICALMECHANICAL AND RHEOLOGICAL PROPERTIES AND ON CHEMICAL
RESISTANCE OF POLYURETHANE PAINT, Dragana S. Lazi, Danica M. Simi,
Aleksandra D. Samolov
614 THERMAL ANALYSIS OF NANOCRYSTALLINE NIFE2O4 PHASE FORMATION
IN SOLID STATE REACTION, Vladan osovi, Aleksandar osovi, Toma ak,
Nadeda Talijan, Duko Mini, Dragana ivkovi
618 PRELIMINARY ANALYSIS OF THE POSSIBILITY OF PREPARING PVB/IF-WS2
COMPOSITES. EFFECT OF NANOPARTICLES ADDITION ON THERMAL AND
RHEOLOGICAL BEHAVIOR OF PVB, Danica M. Simi, Duica B. Stojanovi, Mirjana
Dimi, Ljubica Totovski, Saa Brzi, Petar S. Uskokovi, Radoslav R. Aleksi

624 HIGH PERFORMANCE LIQUID CHROMATOGRAPHY DETERMINATION OF


2,4,6-TRINITROTOLUENE IN WATER SOLUTION, Jovica Nei, Ljiljana Jelisavac,
Aleksandar Marinkovi, Slavia Stojiljkovi
630 MEASURING CLEANING CLASS OF OIL AFTER TRIBOLOGICAL TESTING,
Radomir Janji, Slobodan Mitrovi, Dragan Duni, Ivan Maui, Blaa Stojanovi, Milan
Bukvi, Zoran Ili
636 RECYCLING LITHIUM - ION BATTERY, Milan Bukvi, Radomir Janji, Blaa
Stojanovi
642 NUMERICAL CALCULATION OF J-INTEGRAL USING FINITE ELEMENTS
METHOD, Bahrudin Hrnjica, Fadil Islamovi, Denana Gao, Esad Bajramovi
646 INFLUENCE OF DIFFERENT TYPES OF POLYMER IMPREGNATION ON
SPECTRAL REFLECTION OF TEXTILLE MATERIALS, Aleksandra Samolov, Milan
Kuli
649 QUALITY OF RECOVERED EXPLOSIVES OBTAINED FROM DELABORATED
MUNITIONS, Maja Matovi, Ljiljana Bundalo
654 THE STRENGTH INVESTIGATION OF SPECIFIC POLYMERIC COMPOSITE
ELEMENT/METALLIC ELEMENT JOINT REALIZED BY PINS, Slobodan
itakovi, Jovan Radulovi
659 QUALITATIVE AND QUANTITATIVE ASSESSMENT OF BOND STRENGTH OF
SOLID ROCKET PROPELLANT AND THERMOPLASTIC MATERIAL FOR
CARTRIDGE LOADED GRAIN, Jovan Radulovi
665 SYNTHESIS OF RE/PD HETEROGENEOUS CATALYSTS SUPPORTED ON HMS
USING SOL-GEL METHOD FOLLOWED BY SUPERCRITICAL DRYING WITH
EXCESS SOLVENT, Dragana Proki Vidojevi, Sandra B. Glii, Aleksandar M. Orlovi
671 UNDERSTANDING PLASMA SPRAYING PROCESS AND APPLICATION IN
DEFENSE INDUSTRY, Bogdan Nedi, Marko Jankovi
678 THERMAL STABILITY AND MICROSTRUCTURAL CHANGES INDUCED BY
ANNEALING IN NANOCRYSTALLINE FE72CU1V4SI15B8 ALLOY, Radoslav Surla,
Milica Vasi, Neboja Mitrovi, Ljubica Radovi, Ljubica Totovski, Dragica Mini
682 LOW LEVEL TRITIUM DETERMINATION IN ENVIRONMENTAL SAMPLES
USING 1220 QUANTULUS, Nevena Zdjelarevi, Marija Leki, Nataa Lazarevi
685 OPTIMIZATION AND VIRTUAL QUALITY CONTROL OF A CASTING, Sreko
Manasijevi, Radomir Radia, Janez Pristravec, Velimir Komadini, Zoran Radosavljevi
8. SECTION : QUALITY, STANDARDIZATION, METROLOGY,
MAINTENANCE AND EXPLOITATION

695 RELIABILITY PREDICTION OF ELECTRONIC EQUIPMENT: PROBLEMS AND


EXPERIENCE, Slavko Pokorni
701 APPLICATION OF INNOVATION STANDARDS IN THE FIELD OF WEAPONRY
AND MILITARY EQUIPMENT, Duan Raji, Obrad abarkapa
705 SURFACE TEXTURE FILTRATION INTERNATIONAL STANDARDS AND
FILTRATIONS TECHNIQUE OVERVIEW, Srdjan ivkovi, Branka Lukovi, Veljko
Petrovi

710 SYSTEM FOR REMOTE MONITORING AND CONTROL OF HF-OTH RADAR,


Bojan Doli, Dejan Nikoli, Nikola Tosi, Nikola Leki, Vladimir D. Orli, Branislav M.
Todorovi
715 MODEL OF IMPROVING MAINTENANCE OF TELECOMMUNICATION
DEVICES, Vojkan Radonji, Milenko iri, Branko Resimi, Ivan Milojevi
721 USAGE AN INFRARED THERMOGRAPHY FOR THE PROCESS CONDITIONBASED MAINTENANCE OF SHIPS SYSTEMS, Veselin Mrdak
727 VOLUMETRIC CALIBRATION FOR IMPROVING ACCURACY OF AFP/ATL
MACHINES, Samoil Samak, Igor Dimovski, Vladimir Dukovski, Mirjana Trompeska
733 DIAGNOSTIC APPROACH TO THE MAINTENANACE OF MARINE SYSTEMS,
Duan Cincar
739 MAINTENANCE OF HYBRID VEHICLES, Blaa Stojanovi, Milan Bukvi, Radomir
Janji
745 A NEW APPROACH TO CREATING AND MANAGING TECHNICAL
PUBLICATIONS FOR AIRCRAFT LASTA USING S1000D STANDARD, Branko
Dragi, Vojislav Devi, Miodrag Ivanievi
750 NEW ISSUE OF STANDARD AS/EN 9100:2016, EXPECTATION AND BENEFITS
FOR CUSTOMERS, Biljana Markovi


OCCASIONAL LECTURE

MILEVA MARIĆ EINSTEIN: HER LIFE, WORK AND FATE

Prof. dr VELIMIR ABRAMOVIĆ

Mileva Marić (1875-1948), a Serbian mathematician and physicist, was Einstein's colleague, confidante and wife. In 1896, at the age of twenty-one, as the only woman beginning studies in the mathematical section that year, Mileva entered the Swiss Federal Polytechnic in Zurich, at the same time as Einstein, who was three and a half years younger than her. They married in 1903. Doubtless, after their marriage, Mileva subordinated her professional goals to Einstein's. There are strong indications and evidence that her influence during Einstein's most creative period was enormous. But it is still to be properly researched and determined to what extent the mathematician Mileva Marić Einstein truly contributed to Brownian motion (Einstein's doctoral thesis, 1906), the electrodynamics of moving bodies (1905; while working on the subject, Einstein wrote to Mileva of "our work on relative motion"), the explanation of the photoelectric effect (1905), and the energy-to-matter conversion formula (E=mc²), which Einstein described in 1919 as the most important upshot of the special theory of relativity. For the sake of scientific truth and clarity we have to single out, recover and evaluate Mileva's original mathematical and physical ideas that underlie the theoretical physics of the XX century.

There is a philosophical saying that one who really does the work is rarely credited for it. Overshadowed by the worldwide famous and self-centered personality of her great husband Albert, and, after their divorce in 1919, burdened by tiring and painful family obligations, Mileva could never again raise enough strength and motivation to return to scientific problems on her own.

PLENARY LECTURES

IMPLEMENTATION OF INTEGRATED LOGISTIC SUPPORT TECHNOLOGIES: FROM LOGISTIC SUPPORT ANALYSIS UP TO PERFORMANCE BASED LOGISTICS
Dr. SCI. (TECH) EVGENY V. SUDOV
R&D Center Applied Logistics

Abstract:
Integrated Logistic Support (ILS) is a combination of modern management, engineering and information technologies dedicated to the development and support of the in-service exploitation system of machine-building products. The core of ILS is logistic support analysis (LSA). This is a complex of formal tasks built around modelling the product and the maintenance procedures needed to keep the product in the required condition. Such a model allows the expected availability of the product and the corresponding maintenance costs to be estimated. During operation of the product this model may be tuned on the basis of operational statistics. Some parameters of the exploitation system may also be changed, such as maintenance task conditions and intervals, stratification of tasks between levels of repair, material management algorithms and so on.
It is quite obvious that practical implementation of all these sophisticated methods requires great effort, as well as methodical and normative support.
In 2006-2016 more than 30 national standards in the area of ILS were developed and implemented in the Russian Federation. At the same time a number of software tools for LSA and for the creation of interactive maintenance/repair documentation were developed and implemented in different branches of industry (aircraft, shipbuilding, defense).
All this accumulated practice and experience allows getting close to a new model of the relation between Customer and Supplier, in which the subject of the contract is not only services but also the achievement of certain indicators of readiness (availability). Such a modern approach is now known as Performance Based Logistics.

HISTORICAL DEVELOPMENT OF MODERN SMALL ARMS TECHNOLOGY: OAK RIDGE NATIONAL LABORATORY PERSPECTIVE
SLOBODAN RAJIĆ
Oak Ridge National Laboratory, Oak Ridge TN, USA, rajics@ornl.gov
Abstract:
It can be argued that the development of modern small arms had its origins in 1944 Nazi Germany with the introduction
of the StG44. This first assault rifle was intended to replace both the bulky long-range battle rifle and the pistol caliber
submachine gun with a single weapon. This idea of an intermediate caliber mated with a large ammunition capacity and
full automatic capability revolutionized the individual combat weapon. For the next forty years virtually all individual
weapons were directly based on this gas-operated and auto loading design, including the two most ubiquitous assault
rifles of the west and east, the AR and AK platforms, respectively. Although a limited amount of research was ongoing,
very few original ideas were fielded during that period. The significant lack of advancement in small arms technology
during this time was in large contrast to the exploding new capabilities of modern air forces and navies. In the 1980s it
seemed that some advanced technology, available to the individual soldier, was finally being developed for actual
implementation. The Heckler & Koch G11 and the Barrett Firearms 82A1 provided decidedly new technology and
novel capabilities. However, even though the G11 was advanced in many ways, its high cost and final deployment near
what turned out to be the end of the cold war reduced it to a mere research and development demonstration. Although
the 82A1 was adopted by the United States (US) military due to its ability to extend the individual soldier's reach from
~1,000 m to ~2,500 m, its lack of advanced lightweight materials resulted in its adoption in relatively small numbers.
After another 25 years, only minor modifications to the AR and AK weapon systems have been made. The only major
advancement to individual soldier weapons over many decades has been the XM25 grenade launcher, a highly
sophisticated weapon and fire-control system that can change battlefield tactics. Although very expensive, this system
is the first significant advancement in small arms technology since the AR and AK platforms were first developed,
about 60 and 70 years ago, respectively. Even though modern air forces would never consider using, even in their
reserve inventories, platforms that were nearly 70 years old, the world's premier armies seem resolved and destined to
soon be using nearly century-old small arms technology. This lack of small arms development and implementation
during the cold war is difficult to understand since this time period represented the largest arms race in the history of the
world. The typical answer given by some to explain this situation is the relatively large cost associated with regularly
replacing the entire small arms portfolio. However with a present global defense budget of approximately $2T, and the
US representing about 40 percent of that amount, it seems inexplicable based solely on cost. The US spends less than
one percent of its research and development budget on small arms. US casualty data from recent wars, Iraq and
Afghanistan, show that the probability of a US soldier expiring from a rifle bullet is much larger than that from an
explosive device. If the many billions of dollars spent on detecting and defeating improvised explosive devices were
justified, then the extremely small amounts spent on small arms development are particularly difficult to explain. Due to
the very advanced present technological status of sensors, materials, processing power, manufacturing, and battery
storage, it seems clear that a much higher level of small arms capability can easily be implemented. At a minimum it
should be a goal of a military to produce small arms that can have the projectile point-of-impact coincident with the
aim-point (zeroed) under all conditions and at all times. Presently, weapons are zeroed at some known distance. The
problem is that at any other distance and environmental condition, the correct aim-point has to be estimated by the skill
of the operator, a challenging task for a young and inexperienced soldier. The Oak Ridge National Laboratory has been
developing advanced technologies related to small arms applications for many years in response to the concerns stated
above. We have developed rapid manufacturing methods that can take advantage of exotic new materials and alloys.
We have also developed surface treatments that are lubricous or superhydrophobic and improve both maintenance and
reliability of existing small arms. We have also been able to apply the knowledge gained during our vast history of
sensor development to the small arms field and have developed a barrel deflection sensor with reticle compensation in
real-time. Very recently we have developed a projectile tracking approach that can terminally track large caliber sniper
bullets. Since small arms are often the weapon of choice for many terrorists, ORNL is now focusing on the development
of small arms technologies that will provide soldiers with weapons that possess a large technological overmatch
capability. With the new hot war of terrorism on the rise recently, hopefully the next 70 years will be more
technologically productive in terms of small arms development compared to the cold war.

SECTION I

Aerodynamics and Flight Dynamics

CHAIRMAN
Marija Samardžić, PhD
Dušan Ćurčić, PhD

EFFECT OF BASE BLEED ON THE DRAG REDUCTION


HABIB BELAIDOUNI
Military Academy, University of Defense, Belgrade, h_belaidouni@yahoo.fr
SAŠA ŽIVKOVIĆ
Military Technical Institute, Belgrade, sasavite@yahoo.com
MIRKO KOZIĆ
Military Technical Institute, Belgrade, mkozic@open.telekom.rs
MARIJA SAMARDŽIĆ
Military Technical Institute, Belgrade, majasam@ptt.rs
ABDELWAHID BOUTEMEDJET
Military Academy, University of Defense, Belgrade, abdelwahed1954@gmail.com

Abstract: One of the most important aerodynamic performance characteristics of an artillery projectile is the total drag. The total drag of a projectile can be divided into three components: pressure drag, viscous drag, and base drag. The base drag is a major contributor to the total drag; for that reason it is important to have a good estimate of it in the preliminary design stage of a projectile. Projectiles with base bleed reduce the base drag by injection of gas, generated by burning of a composite propellant, into the base region. In this paper, the internal ballistic calculation of an existing base bleed configuration is presented. Various turbulence models were tested in CFD computations. The numerical results were compared and validated against semi-empirical theory. The numerical results obtained with the most appropriate turbulence model served in further CFD calculations of the aerodynamic drag of the projectile with and without the base bleed effect.
Keywords: Base drag, Base bleed, Computational fluid dynamics.
1. INTRODUCTION

Aerodynamic bodies such as projectiles, missiles and rockets generally undergo significant deterioration of flight performance due to drag. For these types of flight bodies, the drag in the base region in particular has the most significant contribution to the total drag. At transonic speeds, for example, base drag constitutes a major portion, up to 50% of the total drag, for typical projectiles at Mach 0.9 [1]. Therefore the base drag should be considered separately from the other pressure drag components. For this reason, the minimization of base drag has been an important issue to date, and considerable effort has been made to find suitable techniques for obtaining a low base drag shell design.

During projectile flight, directly behind the base, reverse flow can be seen (Picture 1). The large turning angle behind the base causes separation and formation of the reverse flow known as the recirculation region or the separation bubble [2-3]. The size of the recirculation region determines the turning angle of the external flow, and therefore the strength of the expansion waves. A smaller recirculation region forces the flow to turn more sharply, leading to a stronger expansion wave and lower pressures behind the base. Therefore, a small separated region causes larger base drag than a large one. In the recirculation region, the point along the axis of symmetry where the streamwise velocity diminishes is the so-called shear layer reattachment point. As the shear layer reattaches, the flow is forced to turn along the axis of symmetry, causing the formation of a reattachment shock.

Picture 1. Flow field behind a projectile without base bleed [8]

Picture 2 shows that injecting small amounts of gas into the flow field behind the base of the projectile splits the originally large recirculation zone into two parts. One recirculation region remains at the symmetry axis, and the other one is formed right behind the base corner [4-7]. As the mass flow rate increases, the recirculation zone at the axis is pushed further out, and the one at the base corner becomes larger. If the mass flow rate is increased further, the recirculation region near the axis disappears, and the base bleed follows a straight path.

Picture 2. Flow field behind a projectile with base bleed [8]

The objective of the present study is to estimate numerically the drag coefficient at different Mach numbers and angles of attack for a 122 mm supersonic artillery shell. The analysis was done numerically using the software Gambit 2.4 and ANSYS FLUENT [8].

2. NUMERICAL MODEL

The geometry of the projectile and the meshing were made using GAMBIT 2.4. A structured computational mesh of about 65000 cells was constructed around the projectile with base bleed. The computational domain was set up with dimensions of 40 d, where d is the model diameter, downstream from the base and 25 d away from the model axis, in order to ensure free-stream conditions and thus obtain better convergence. The aerodynamic coefficients were the criterion of convergence in all cases.

A pressure far-field boundary condition, as shown in Picture 3, has been used. The calculation was performed using a two-dimensional axisymmetric, density-based solver with various turbulence models and second-order upwind discretization of the flow and turbulence equations.

Several turbulence models were investigated:
- the two-equation realizable k-ε turbulence model, which solves transport equations for the turbulence kinetic energy k and its rate of dissipation ε,
- the three-equation k-kl-ω turbulent viscosity model, which solves for the turbulent kinetic energy, the laminar kinetic energy and the specific dissipation rate,
- the two-equation SST k-ω model, which solves the shear-stress transport equations.

The grid in the whole numerical domain for the 122 mm projectile can be seen in Picture 4.

Picture 3. Computational grids [9]

Picture 4. Mesh of the numerical domain for the 122 mm projectile

3. INTERNAL BALLISTIC CALCULATION

The gas generator for the 122 mm base bleed projectile is housed in the afterbody, whose cross-section is shown in Picture 5. It contains two identical solid propellant grains. These two elements provide an internal combustion surface consisting of two cylindrical surfaces and four flat surfaces, which results in a decreasing total burning surface area.

In experimental research [10], a series of measurements of the base bleed effect was performed for different flight Mach numbers. The results are shown in Picture 6, which gives the reduction of the base pressure as a function of the dimensionless injection parameter I for different values of the flight Mach number. The parameter I is the ratio of the bleed mass flow rate to the product of the base area and the freestream mass flux:

I = \frac{\dot{m}_p}{\rho_\infty V_\infty A_{base}}    (1)

The conclusion was that for most Mach numbers there is an optimal value of I, and that this optimum value increases as the projectile flight Mach number decreases. According to the results of the research [10], a semi-empirical model was established which allows the optimum working regime of the gas generator to be determined in order to achieve a reduction of the base drag coefficient.

Picture 5. Scheme of gas generator unit and propellant grain
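As a simple numerical illustration of eq. (1), the short sketch below evaluates the injection parameter I for one assumed flight condition. Only the 122 mm calibre comes from the paper; the atmosphere, Mach number and bleed mass flow rate are illustrative placeholder values, not measured data from [10].

import math

# Illustrative evaluation of the dimensionless injection parameter I of eq. (1).
# The calibre matches the 122 mm projectile of the paper; the flight condition
# and the bleed mass flow rate below are assumed placeholder values.
def injection_parameter(m_dot_p, rho_inf, V_inf, A_base):
    # I = m_dot_p / (rho_inf * V_inf * A_base), eq. (1)
    return m_dot_p / (rho_inf * V_inf * A_base)

d = 0.122                                         # projectile calibre [m]
A_base = math.pi * d ** 2 / 4                     # base area [m^2]
rho_inf = 1.225                                   # sea-level air density [kg/m^3] (assumed)
V_inf = 0.9 * math.sqrt(1.4 * 287.05 * 288.15)    # Mach 0.9 at 15 deg C (assumed)
m_dot_p = 0.12                                    # bleed mass flow rate [kg/s] (assumed)

print(f"I = {injection_parameter(m_dot_p, rho_inf, V_inf, A_base):.4f}")

The resulting I can then be compared against the optimum injection parameter reported for the corresponding Mach number in [10].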


Picture 6. Experimental results showing the effect of Mach number [10]

Picture 7. Required pressure vs. time curve to obtain the desired optimum mass flow rate change

Picture 8. Experimental chamber pressure vs. time for the 122 mm gas generator


The mass flow rate of combustion products bleeding through the orifice at subsonic velocity is:

\dot{m}_p = I \, \rho_\infty V_\infty A_{base}    (2)

where V_\infty = M_\infty (kRT_\infty)^{1/2}.

Since all influential quantities depend on the altitude along the projectile trajectory, they can be expressed as functions of time using the altitude y(t) and Mach number M(t) parameters: air density \rho(y) = \rho(t) and temperature T(y) = T(t). Finally, the optimal mass flow rate change in time can be determined as:

\dot{m}_p(t) = I\bigl(M(t)\bigr)\, \rho(t) \sqrt{kRT(t)}\, M(t)\, A_{base}    (3)

On the other hand, the required combustion products mass flow rate exiting from the gas generator (GG), shown in Picture 7, can be achieved by an appropriate internal ballistic design of the gas generator. Depending on the thermochemical characteristics of the propellant, and using gas dynamics relations, the products mass flow rate as a function of the pressure in the combustion chamber is obtained as:

\dot{m}_p(p) = \rho_2(p)\, V_2(p)\, A_2    (4)
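The required bleed schedule of eqs. (2)-(3) can be illustrated numerically as follows; the optimum-I table, the altitude and Mach histories and the simple ISA-type atmosphere below are hypothetical placeholders, not the semi-empirical model of [10].

import math
import numpy as np

# Illustrative evaluation of eq. (3): required bleed mass flow along a trajectory,
# m_dot_p(t) = I(M(t)) * rho(t) * sqrt(k R T(t)) * M(t) * A_base.
# The optimum-I table and the trajectory histories are invented placeholders.
A_BASE = math.pi * 0.122 ** 2 / 4                 # base area of the 122 mm projectile [m^2]
K, R = 1.4, 287.05                                # ratio of specific heats, gas constant

M_TAB = np.array([0.8, 1.0, 1.5, 2.0])            # flight Mach number (placeholder)
I_TAB = np.array([0.010, 0.008, 0.006, 0.005])    # assumed optimum I(M) (placeholder)

def atmosphere(h):
    # Simple ISA-type troposphere: temperature [K] and density [kg/m^3]
    T = 288.15 - 0.0065 * h
    return T, 1.225 * (T / 288.15) ** 4.2561

def m_dot_required(h, M):
    T, rho = atmosphere(h)
    I_opt = np.interp(M, M_TAB, I_TAB)            # optimum injection parameter at this Mach
    return I_opt * rho * math.sqrt(K * R * T) * M * A_BASE

# Placeholder altitude and Mach histories y(t) and M(t):
for t in (0.0, 5.0, 10.0, 20.0):
    h, M = 200.0 * t, max(0.8, 2.0 - 0.05 * t)
    print(f"t = {t:5.1f} s  m_dot_p = {m_dot_required(h, M):.4f} kg/s")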

The designed gas generator (GG) was tested in the Military Technical Institute laboratory "Technicum" in Barič. The experimentally obtained curve of chamber pressure vs. time, shown in Picture 8, is sufficiently accurate.

4. RESULTS

The zero-yaw drag coefficient CX0 is compared first, with remarkably good agreement with the semi-empirical data [11] when the k-ε turbulence model is utilized (Picture 9). The k-kl-ω model does not do quite as well. The two-equation SST model was also implemented, with no significant differences noted with respect to the semi-empirical data. The k-ε turbulence model was therefore selected for the simulation of the projectile with base bleed, because it shows good agreement for CX0.

Picture 9. Zero-yaw drag coefficient vs. Mach number

The effect of the mass flow rate on the drag coefficient at different flight regimes is shown in Picture 10. It shows a reduction of about 11% in the drag coefficient in both the subsonic and the supersonic flow regimes.

Picture 10. Effect of base bleed with \dot{m} = 0.1231 kg/s on the drag coefficient

5. CONCLUSION

In this paper, numerical computations were performed for a 122 mm projectile at different values of the Mach number. The addition of base bleed with a base cavity was also computed. A series of drag coefficient calculations was run for the 122 mm projectile with and without base bleed. The computed Mach number range was 0.8 < M < 2 at an angle of attack α = 0. For the case without base bleed the best turbulence model was identified, and the k-ε turbulence model was selected for the simulation of the projectile with the gas generator (base bleed). The computed drag coefficient of the projectile with and without base bleed showed a reduction of about 11%.

This investigation will be continued by studying the effect of base bleed on the drag coefficient at different angles of attack and the influence of base bleed on the lateral aerodynamic coefficients, especially on the dynamic derivatives and stability parameters.

NOMENCLATURE


ṁ_p - mass flow rate of combustion products
A_base - base area of the projectile
A_2 - orifice area
V_2 - velocity of combustion products at section 2
V_∞ - freestream velocity
ρ_2 - density of combustion products at section 2
ρ_∞ - freestream density
R - gas constant
T - temperature
k - ratio of specific heats
α - angle of attack
C_db - base drag coefficient
C_X0 - zero-yaw drag coefficient
I - dimensionless injection parameter
M - Mach number
p_b - base pressure
MTI - Military Technical Institute

References
[1] Sahu, J., Nietubicz, C.J.: Navier-Stokes Computations of Projectile Base Flow with and without Mass Injection, AIAA Journal, 23 (9), 1985, 1348-1355.
[2] Herrin, J.L., Dutton, J.C.: Supersonic Base Flow Experiments in the Near Wake of a Cylindrical Afterbody, AIAA Journal, 32 (1), 1994, 77-83.
[3] Herrin, J.L., Dutton, J.C.: Supersonic Near-Wake Afterbody Boattailing Effects on Axisymmetric Bodies, Journal of Spacecraft and Rockets, 31 (6), 1994, 1021-1028.
[4] Cortright, E.M., Schroeder, A.H.: Preliminary Investigation of Effectiveness of Base Bleed in Reducing Drag of Blunt-Base Bodies in Supersonic Stream, NACA RM E51A26, March 1951.
[5] Reid, J., Hastings, R.C.: The Effect of a Central Jet on the Base Pressure of a Cylindrical Afterbody in a Supersonic Stream, Aeronautical Research Council (Great Britain), Reports and Memoranda No. 3224, Dec. 1959.
[6] Bowman, J.E., Clayden, W.A.: Cylindrical Afterbodies in Supersonic Flow with Gas Ejection, AIAA Journal, 5 (6), 1967, 1524-1525.
[7] Valentine, D.T., Przirembel, C.E.G.: Turbulent Axisymmetric Near-Wake at Mach Four with Base Injection, AIAA Journal, 8 (12), 1970, 2279-2280.
[8] Ansys Inc.: ANSYS FLUENT 14.0 and GAMBIT 2.1, licensed to VTI, 2010.
[9] Lee, Y.K., Kim, H.D., Raghunathan, S.: A Study of Base Drag Optimization Using Mass Bleed, 15th Australasian Fluid Mechanics Conference, 13-17 December 2004, Australia.
[10] Bowman, J.E., Clayden, W.A.: Reduction of Base Drag by Gas Ejection, R.A.R.D.E. Report 4/64, 1969.
[11] Jaramaz, S., Injac, M.: Method of Calculation of the Range of a Base Bleed Projectile (in Serbian), Military Technical Institute, Belgrade, 1989.

DIVERGENCE ANALYSIS OF THIN COMPOSITE PLATES IN SUBSONIC AND TRANSONIC FLOWS
MIRKO DINULOVIĆ
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, Serbia, mdinulovic@mas.bg.ac.rs
ALEKSANDAR GRBOVIĆ
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, Serbia, agrbovic@mas.bg.ac.rs
DANILO PETRAŠINOVIĆ
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, Serbia, dpetrasinovic@mas.bg.ac.rs

Abstract: In the present paper the static aeroelastic phenomenon known as torsional divergence is investigated on rocket stabilizers (fins) made of composite materials. Using an analytic approach, the differential equation for torsional divergence of a composite trapezoidal stabilizer is derived. The equation obtained is a second order differential equation with variable coefficients. The solution of the divergence equation was obtained using Galerkin's approach, and the complete solution procedure is presented. The required material elastic coefficients were calculated using micro-mechanics composite analysis for the lamina and classical lamination theory (CLT) for the complete stabilizer laminate lay-up.
It was found that Galerkin's approach can be successfully deployed in solving the differential divergence equation, and the divergence speed (VD) of a composite stabilizer in subsonic air-flow can be effectively calculated.

1. INTRODUCTION
Static aeroelasticity investigates problems that involve the
interaction between steady-state aerodynamic forces and
flexible structure deformations. Lifting surfaces (airplane
wing, tail, rocket stabilizer...) exposed to air-flow may exhibit
instability known as torsional divergence. Torsional divergence is a static aeroelastic problem, and it results from the fact that when the lifting surface is exposed to air-flow an aerodynamic moment is generated. This aerodynamic moment is proportional to the square of the flight speed. On the other hand, the structural elastic stiffness is independent of the flight speed, since it is an inherent characteristic of the structure itself. Based on this, it is obvious that a critical air-speed may exist at which the structural elastic stiffness is no longer sufficient to hold the lifting surface in a bounded deformed position under the aerodynamic moment. Above the critical air-speed even a small deformation of a lifting surface leads to large angles of twist (and therefore large strains and stresses), which may cause lifting surface failure.

Figure 1. The effect of structure flexibility


Structural stiffness is relatively low for flexible plates and shells, which represent the basic building blocks of modern flight structures. These structures are therefore more susceptible to instabilities even at low flow velocities; in this case, divergence and flutter can easily occur at relatively low flow velocity. During the design phase of the lifting surfaces it must be ensured that the structure is capable of sustaining the applied loads and that it remains stable (divergence free) within the whole range of the flight envelope [1-4].

In aerodynamics a lifting surface can be represented by an object which transforms the input parameters, velocity (V) and angle of attack (α), into an aerodynamic force and an aerodynamic moment, whereas in aeroelasticity a feedback loop exists where, through the elasticity of the structure, the value of the angle of attack (α) is modified by the addition of the structural angle of twist (θ) (Figure 1).

2. COMPOSITE FIN TORSIONAL DIVERGENCE SPEED

Let us assume that the lifting surface has variable aerodynamic, geometric and mechanical parameters along the span, as presented in the following figure (Figure 2).

Figure 2. Differential element of the fin and forces distribution

The differential lifting aerodynamic force, denoted as dR_z(y), acts in the aerodynamic center of the differential element of the fin. The area of the differential element is l(y)dy. Assuming quasi-steady flow around the stabilizer fin, the lifting aerodynamic force on the fin differential element is therefore:

dR_z(y) = \frac{1}{2}\rho V^2 \frac{dC_z(y)}{d\alpha}\,[\alpha(y)+\theta(y)]\, l(y)\, dy    (1)

The differential value of the aerodynamic moment is:

dM_{ac}(y) = \frac{1}{2}\rho V^2 C_{mac}(y)\, l(y)\, dy\; l(y)    (2)

Also, at the center of gravity the differential inertia force n g(y) dy acts, where n is the load factor and g(y) is the differential element mass per unit span.

In the shear center, on one side of the differential element the torsion moment M_T(y) is applied, and on the other side the torsion moment with the corresponding gradient:

M_T(y) + \frac{dM_T(y)}{dy}\,dy    (3)

The equilibrium condition of the differential fin element about the shear center requires that the moment equation is satisfied in the following form:

M_T(y) + \frac{dM_T(y)}{dy}\,dy - M_T(y) + dR_z(y)\, e(y)\, l(y) + dM_{ac}(y) - n\, g(y)\, h(y)\, dy = 0    (4)

Substituting the relations for the quasi-steady differential aerodynamic force and the differential aerodynamic moment into the previous equation, the equilibrium condition becomes:

\frac{dM_T(y)}{dy} + \frac{1}{2}\rho V^2 \frac{dC_z(y)}{d\alpha}\,[\alpha(y)+\theta(y)]\, e(y)\, l^2(y) + \frac{1}{2}\rho V^2 C_{mac}(y)\, l^2(y) - n\, g(y)\, h(y) = 0    (5)

Using the relation between the applied torque or moment of torsion (M_T), the angle of twist (θ), the shear modulus (G) and the torsion constant (J), the change of the angle of twist along the fin span may be expressed as:

\frac{d\theta}{dy} = \frac{M_T}{GJ}    (6)

Differentiating the previous relation with respect to the span coordinate y one obtains:

\frac{dM_T}{dy} = \frac{d}{dy}\left(GJ\,\frac{d\theta}{dy}\right) = GJ(y)\,\frac{d^2\theta}{dy^2} + \frac{d\bigl(GJ(y)\bigr)}{dy}\,\frac{d\theta}{dy}    (7)

Therefore, the equilibrium condition of the differential fin element about the shear center is:

GJ(y)\,\frac{d^2\theta}{dy^2} + \frac{d\bigl(GJ(y)\bigr)}{dy}\,\frac{d\theta}{dy} + \frac{1}{2}\rho V^2 \frac{dC_z(y)}{d\alpha}\, l^2(y)\, e(y)\,\theta(y) = -\frac{1}{2}\rho V^2 \frac{dC_z(y)}{d\alpha}\,\alpha(y)\, l^2(y)\, e(y) - \frac{1}{2}\rho V^2 C_{mac}(y)\, l^2(y) + n\, g(y)\, h(y)    (8)

This equation represents a linear non-homogeneous second-order equation with variable coefficients. The objective is to solve it for the angle of twist (θ) along the fin span as a function of the quasi-steady flow speed (V) and to determine the extreme values of the twisting angle, which correspond to the divergence speed (VD). However, due to its complexity, finding the analytical solution of this differential equation is not possible for the real lifting surface geometries used nowadays. Only in cases where

GJ = const, l = const, e = const, ρ = const, dC_z/dα = const, C_{mac} = const, h = const, g = const,    (9)

which represent a rectangular surface with constant parameters along the span (y), is it possible to find the analytic solution of the derived differential equation, and it has been investigated by many authors. The solution for the divergence speed (VD) is:

V_D = \sqrt{\frac{2\, GJ\, \pi^2}{\rho\, \frac{dC_z}{d\alpha}\, l^2\, e\, b^2}}    (10)
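As an illustration of eq. (10), the short sketch below evaluates the divergence speed of a uniform (rectangular) fin; all parameter values are assumed for demonstration and are not the composite stabilizer data used later in the numerical example.

import math

# Illustrative evaluation of eq. (10) for a uniform (rectangular) fin:
# V_D = sqrt(2 * GJ * pi^2 / (rho * dCz_dalpha * l^2 * e * b^2)).
# All parameter values below are assumed for demonstration only.
def divergence_speed(GJ, rho, dCz_dalpha, l, e, b):
    return math.sqrt(2.0 * GJ * math.pi ** 2 / (rho * dCz_dalpha * l ** 2 * e * b ** 2))

GJ = 2.0                     # torsional stiffness G*J [N m^2] (assumed)
rho = 1.225                  # air density [kg/m^3]
dCz_dalpha = 2 * math.pi     # lift-curve slope per radian (thin-airfoil estimate)
l = 0.15                     # chord [m] (assumed)
e = 0.03                     # aerodynamic-center to shear-center offset [m] (assumed)
b = 0.60                     # span [m] (assumed)

print(f"V_D = {divergence_speed(GJ, rho, dCz_dalpha, l, e, b):.1f} m/s")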

3. SOLUTION OF THE DIVERGENCE SPEED DIFFERENTIAL EQUATION BY GALERKIN'S METHOD

However, for more complex geometries and variable fin parameters along the span, the solution of the differential equation at hand has to be sought by a numerical approach. In this paper the solution of the equation is found by the Galerkin method [5]. The Galerkin method is a weighted residual method, where the weight functions are chosen from the basis functions w(x) ∈ {φ_i(x)}, i = 1, …, n, and it is required that the following n equations hold true:

\int \varphi_i(x)\,\big(L[u(x)] + f(x)\big)\,dx = 0, \quad i = 1, 2, \dots, n   (11)

The differential equation for the equilibrium condition of the differential fin element about the shear center (eq. 8), derived in the previous section of this paper, can for convenience be expressed as:

F(y)\,\frac{d^2\theta}{dy^2} + F'(y)\,\frac{d\theta}{dy} + A(y,V)\,\theta = B(y,V)   (12)

where, by comparison, one may conclude that the variable coefficients in the previous equation are:

F(y) = GJ(y), \qquad F'(y) = \frac{d(GJ(y))}{dy}   (13)

A(y,V) = \frac{1}{2}\rho V^2 \frac{dC_z(y)}{d\alpha}\,l^2(y)\,e(y)   (14)

B(y,V) = -\frac{1}{2}\rho V^2 \frac{dC_z(y)}{d\alpha}\,\alpha(y)\,l^2(y)\,e(y) - \frac{1}{2}\rho V^2 C_{mac}(y)\,l^2(y) + n\,g(y)\,h(y)

Assuming the solution in the form:

\theta(y) = \sum_{i=1}^{N} a_i\,\varphi_i(y)   (15)

the functions φ_i(y) are chosen to satisfy the initial conditions:

\varphi_i(0) = 0, \qquad \varphi_i(b/2) = 0   (16)

In general, there are many functions that satisfy these conditions. In the particular problem the following functions are chosen:

\varphi_1(y) = y\left(1 - \frac{2y}{b}\right), \quad \varphi_2(y) = y\left(1 - \frac{2y}{b}\right)^2, \; \dots, \; \varphi_N(y) = y\left(1 - \frac{2y}{b}\right)^N   (17)

with the derivative φ_k'(y) = (1 − 2y/b)^k − (2ky/b)(1 − 2y/b)^(k−1). It is obvious that the functions (eq. 17) satisfy the required initial conditions (eq. 16).

Substituting the assumed solution in the differential equation at hand, one may obtain:

\sum_{i=1}^{N} a_i\big[\,F(y)\,\varphi_i''(y) + F'(y)\,\varphi_i'(y) + A(y,V)\,\varphi_i(y)\,\big] = B(y,V)   (18)

Using Galerkin's method for error minimization of the assumed solution of the differential equation (eq. 8), it follows that:

\sum_{i=1}^{N} a_i \int_0^{b/2} \big[\,F(y)\,\varphi_i''(y) + F'(y)\,\varphi_i'(y) + A(y,V)\,\varphi_i(y)\,\big]\,\varphi_j(y)\,dy = \int_0^{b/2} B(y,V)\,\varphi_j(y)\,dy   (19)

With the following notation:

A_{ij}(V) = \int_0^{b/2} \big[\,F(y)\,\varphi_i''(y) + F'(y)\,\varphi_i'(y) + A(y,V)\,\varphi_i(y)\,\big]\,\varphi_j(y)\,dy   (20)

B_j(V) = \int_0^{b/2} B(y,V)\,\varphi_j(y)\,dy   (21)

the previous equations assume the form of a system of linear equations:

\sum_{i=1}^{N} A_{ij}(V)\,a_i = B_j(V)   (22)
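To make the procedure of eqs (15)-(22) concrete, the following minimal Python sketch (illustrative only, not the authors' code) assembles A_ij(V) and B_j(V) by simple quadrature using the basis functions of eq. (17), solves the linear system at one speed, and then scans the flow speed for a sign change of det[A_ij], which brackets the divergence speed discussed in the remainder of this section. All fin data below (semi-span, chord, stiffness, offset, incidence) are placeholder assumptions, and the gravity term n·g·h is dropped for brevity.

# Galerkin assembly of eqs (20)-(22) and bracketing of the divergence speed.
import numpy as np

s      = 0.15            # semi-span b/2 [m]                   (assumed)
rho    = 1.225           # air density [kg/m^3]                (assumed)
a_lift = 2 * np.pi       # lift-curve slope dCz/dalpha [1/rad] (assumed)
alpha0 = np.radians(1.0) # rigid incidence alpha(y) [rad]      (assumed)

GJ   = lambda y: 5.0 + 0.0 * y    # GJ(y) [N m^2], taken uniform here (assumed)
dGJ  = lambda y: 0.0 * y          # d(GJ)/dy
l    = lambda y: 0.10 + 0.0 * y   # chord l(y) [m]             (assumed)
e    = lambda y: 0.02 + 0.0 * y   # offset e(y) [m]            (assumed)

def phi(k, y):        # eq. (17) written with u = 1 - y/(b/2)
    return y * (1.0 - y / s) ** k

def dphi(k, y):
    u = 1.0 - y / s
    return u ** k - (k * y / s) * u ** (k - 1)

def d2phi(k, y):
    u = 1.0 - y / s
    out = -(2.0 * k / s) * u ** (k - 1)
    if k > 1:
        out = out + (k * (k - 1) * y / s ** 2) * u ** (k - 2)
    return out

def assemble(V, N=4, nq=400):
    """A_ij(V) and B_j(V) of eqs (20)-(21) by trapezoidal quadrature."""
    y = np.linspace(0.0, s, nq)
    q = 0.5 * rho * V ** 2
    Acoef = q * a_lift * l(y) ** 2 * e(y)             # A(y, V), eq. (14)
    Bcoef = -q * a_lift * alpha0 * l(y) ** 2 * e(y)   # B(y, V), weight term omitted
    A = np.empty((N, N)); B = np.empty(N)
    for j in range(1, N + 1):
        pj = phi(j, y)
        for i in range(1, N + 1):
            resid = GJ(y) * d2phi(i, y) + dGJ(y) * dphi(i, y) + Acoef * phi(i, y)
            A[j - 1, i - 1] = np.trapz(resid * pj, y)
        B[j - 1] = np.trapz(Bcoef * pj, y)
    return A, B

A0, B0 = assemble(300.0)
a_coef = np.linalg.solve(A0, B0)          # twist series coefficients a_i at V = 300 m/s, eq. (22)

# Scan the flow speed: det[A_ij(V)] changes sign where the divergence speed lies.
dets = [(V, np.linalg.det(assemble(V)[0])) for V in np.arange(100.0, 3000.0, 50.0)]
for (V1, d1), (V2, d2) in zip(dets, dets[1:]):
    if d1 * d2 < 0.0:
        print(f"divergence speed bracketed between {V1:.0f} and {V2:.0f} m/s")
        break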

The coefficients are functions of the flight speed (V). The system of equations can be expressed in the following form:

A_11(V) a_1 + A_12(V) a_2 + … + A_1N(V) a_N = B_1(V)
A_21(V) a_1 + A_22(V) a_2 + … + A_2N(V) a_N = B_2(V)
…
A_N1(V) a_1 + A_N2(V) a_2 + … + A_NN(V) a_N = B_N(V)   (23)

In order to find the solution of the above system of linear equations, Cramer's rule can be used, forming the determinants:

D(V) = det[A_ij(V)], i, j = 1, …, N, and D_i(V) = the determinant obtained from D(V) by replacing its i-th column with the right-hand side column (B_1(V), B_2(V), …, B_N(V))   (24)

The required coefficients can then be determined as follows (eq. 25):

a_i(V) = D_i(V) / D(V)   (25)

It has to be emphasized that all parameters in the previous equations are functions of the quasi-steady flow speed (V). The final solution can be written in the following form:

\theta(y,V) = \sum_{i=1}^{N} a_i(V)\,\varphi_i(y) = \frac{1}{D(V)}\sum_{i=1}^{N} D_i(V)\,\varphi_i(y)   (26)

The critical divergence speed is defined so that the angle of twist (θ) at that particular speed becomes infinitely large:

\theta(y, V_D) \to \infty   (27)

It may be concluded that this is satisfied when:

D(V) \to 0   (28)

Therefore, in an engineering application of the presented method, the critical divergence speed may be calculated by assuming a certain flight speed V1; based on this value the coefficients A_ij(V1) and the determinant D(V1) are calculated. If the value of the determinant is not equal to zero, a new flight speed V2 is assumed and the coefficients and determinant are recalculated. Based on the new values it can be verified whether the determinant has changed sign, suggesting that the determinant zero value (condition given by eq. 28) lies between the two guessed flight speeds.

4. NUMERICAL EXAMPLE

The solution of the lifting surface torsional divergence problem presented in the previous section is illustrated on a trapezoidal plate with a cross section in the form of an airfoil. The lifting surface geometry and airfoil data are given in the following picture.

Figure 2. Lifting surface geometry and control sections locations

Table 1. NACA 0006 airfoil data (x and y expressed as fractions of the chord)

x:        0.00000, 0.01250, 0.02500, 0.05000, 0.07500, 0.10000, 0.15000, 0.20000, 0.25000, 0.30000, 0.40000, 0.50000, 0.60000, 0.70000, 0.80000, 0.90000, 0.95000, 1.00000
y upper:  0.00000, 0.00947, 0.01307, 0.01777, 0.02100, 0.02341, 0.02673, 0.02869, 0.02971, 0.03001, 0.02902, 0.02647, 0.02282, 0.01832, 0.01312, 0.00724, 0.00403, 0.00063
y lower:  0.00000, -0.00947, -0.01307, -0.01777, -0.02100, -0.02341, -0.02673, -0.02869, -0.02971, -0.03001, -0.02902, -0.02647, -0.02282, -0.01832, -0.01312, -0.00724, -0.00403, -0.00063

Table 2. Fin material data, AS4 carbon fiber / 3501-6 epoxy matrix (UD)

Vf = 0.63, ρ = 1.58 g/cm³, E11 = 142 GPa, E22 = 10.3 GPa, G12 = 7.2 GPa, ν12 = 0.27, F1t = 2280 MPa, F1c = 1440 MPa, F2t = 57 MPa, F2c = 228 MPa, F6 = 71 MPa
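For orientation, the stiffness distribution GJ(y) that enters eq. (12) can be sketched from the data above. Note that pairing the lamina in-plane shear modulus G12 with the spanwise torsion-constant fit given later in eq. (29) is an illustrative assumption made here, not a statement of the authors' laminate stiffness model, which for a layered composite would in general involve the full laminate shear rigidity.

# Hedged sketch: building GJ(y) from G12 (Table 2) and the J(y) fit of eq. (29).
G12 = 7.2e9          # in-plane shear modulus of AS4/3501-6 [Pa] (Table 2)

def J_fit(y):
    """Polynomial fit of the section polar moment of inertia, eq. (29) [m^4]."""
    return 5e-14 * y ** 2 - 1.4e-11 * y + 1.1e-9

def GJ(y):
    """Torsional stiffness distribution [N m^2] under the stated assumption."""
    return G12 * J_fit(y)

for y in (0, 25, 50, 75, 100):   # control sections of Table 3
    print(f"section {y:3d}: J = {J_fit(y):.2e} m^4, GJ = {GJ(y):6.2f} N m^2")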


Table 3. Fin geometric characteristics

section % | L [m]   | J [m⁴ × 10⁻¹⁰] | Xcg [m] | Es [m] | a.c [m] | e [m]
0         | 0.07624 | 11.3           | 0.0320  | 0.0279 | 0.0191  | 0.0088
25        | 0.06988 | 8.01           | 0.0294  | 0.0255 | 0.0175  | 0.0080
50        | 0.06353 | 5.45           | 0.0267  | 0.0232 | 0.0159  | 0.0073
75        | 0.05718 | 3.59           | 0.0240  | 0.0209 | 0.0143  | 0.0066
100       | 0.05083 | 2.25           | 0.0213  | 0.0186 | 0.0127  | 0.0059

Table 4. Fin geometric characteristic variables used in the divergence calculation

L - section span
J - section polar moment of inertia
Xcg - location of the sectional center of gravity
Es - location of the sectional shear center
a.c - aerodynamic center position
e - distance between the shear center and the aerodynamic center
(section positions are expressed in percent of the span length)

One of the required parameters in the torsional divergence equation derived in the previous section is the polar moment of inertia. For the fin (lifting surface) analyzed in this example it is clear that the values of the polar moment of inertia change along the span. Calculating the moments of inertia for all control sections (1 to 5, Table 3), the best-fit function was expressed in the following form:

J = 5·10⁻¹⁴ y² − 1.4·10⁻¹¹ y + 1.1·10⁻⁹   (29)

Figure 3. Section rotation along fin span for 500 m/s supersonic flow

Figure 4. Fin tip section rotation as a function of flow speed

5. CONCLUSION

In this paper the solution for the torsional divergence speed is presented. This type of static aeroelastic phenomenon can be mathematically expressed with a linear non-homogeneous second-order differential equation with variable coefficients, which was derived earlier in this paper. An analytic solution cannot be found in closed form for this type of problem in cases when the geometry of the lifting surface is relatively complex and the material is not isotropic.

In order to solve the torsional divergence equation for complex lifting surface geometries and orthotropic materials (composites), a numerical approach using Galerkin's method was used. It was found that this solution method can be successfully applied. A numerical example is presented and, using this method, the divergence speed is calculated for a lifting surface in the form of a composite stabilizer fin.

The solution found represents the theoretical divergence speed value (i.e. the speed at which the lifting surface becomes unstable, or divergent). However, the results obtained are in practice non-realistic (the rotations of the lifting surface tip section are too high). These results were expected since structural failure criteria were not used. Incorporating the material failure criteria into the proposed divergence speed calculation method is the course of future work in this area.

References
[1] Librescu, L., Maalawi, K.Y.: Aeroelastic design optimization of thin-walled subsonic wings against divergence, Thin-Walled Structures, 47 (2009), pp. 89-97.
[2] Kirch, A., Clobes, M., Peil, U.: Aeroelastic divergence and flutter: Critical comments on the regulations of EN 1991-1-4, Journal of Wind Engineering and Industrial Aerodynamics, Volume 99, Issue 12, December 2011, pp. 1221-1226.
[3] Mahran, M., Negm, H., El-Sabbagh, A.: Aero-elastic characteristics of tapered plate wings, Finite Elements in Analysis and Design, Volume 94, February 2015, pp. 24-32.
[4] Kameyama, M., Fukunaga, H.: Optimum design of composite plate wings for aeroelastic characteristics using lamination parameters, Computers & Structures, Volume 85, Issues 3-4, February 2007, pp. 213-224.
[5] Feng, X., Lewis, T., Neilan, M.: Discontinuous Galerkin finite element differential calculus and applications to numerical solutions of linear and nonlinear partial differential equations, Journal of Computational and Applied Mathematics, Volume 299, June 2016, pp. 68-91.

AEROACOUSTIC ANALYSIS OF A JET NOZZLE


TONI IVANOV
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, tivanov@mas.bg.ac.rs
VASKO FOTEV
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, vfotev@mas.bg.ac.rs
NEBOJŠA PETROVIĆ
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, npetrovic@mas.bg.ac.rs
ZORANA TRIVKOVIĆ
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, ztrivkovic@mas.bg.ac.rs
DRAGAN KOMAROV
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, dkomarov@mas.bg.ac.rs

Abstract: Computational fluid dynamics (CFD) based on the finite volume method was used in order to obtain the aerodynamic characteristics of a jet nozzle. After the transient CFD results were obtained, the Ffowcs Williams-Hawkings formulation for far-field noise modeling was used for the aeroacoustic computation. A grid sensitivity study was done and different turbulence models were tested. The obtained results were then compared with publicly available experimental data, after which a conclusion was derived.
Keywords: computational aeroacoustics, computational fluid dynamics, jet noise.

1. INTRODUCTION

One of the most important issues in the air traffic industry today is aircraft noise. Many governments have limited the allowable noise level in and around airports through regulations and strict requirements for aircraft external noise certification.

Although in recent years aircraft engines have been significantly improved, the rise in air traffic mandates that the noise generated by aircraft engines be further lowered. The main source of the overall aircraft noise is the jet noise produced by the jet engine. This is especially noticeable during takeoff, when the engines work at maximal power.

Jet noise is an issue in military aircraft as well. Military aircraft have engines with very low bypass ratios and high exit temperatures and velocities, due to which they have very loud noise characteristics that can pose an annoyance to communities near military bases as well as a health threat to ground crews.

In order to better predict the generated jet noise, researchers have been using different approaches over the years. Recently, computational fluid dynamics has been extensively used for noise prediction. One of the methods used for noise calculations is the direct numerical simulation (DNS) of the aeroacoustic noise. This method, although most accurate, requires that the sound field be calculated all the way from the sound source to the receiver. This makes it highly computationally demanding and impractical for noise estimates in the far field. Another approach is to use CFD for the flow field and some acoustic analogy for the noise prediction in the far field.

Nowadays most researchers use in-house codes for the large eddy simulation (LES) or detached eddy simulation (DES) of the flow field and the Ffowcs Williams-Hawkings (FW-H) formulation for the far-field noise prediction [1-3]. Researchers have also made significant efforts to implement Unsteady Reynolds Averaged Navier-Stokes (URANS) solvers with different turbulence viscosity models for predicting jet noise [4-7].

In this paper an aeroacoustic analysis of the SMC000 nozzle was done using two-dimensional axisymmetric URANS with the FW-H methodology for the far-field noise prediction.

2. MATHEMATICAL BACKGROUND

Most of the modern models for aeroacoustic jet noise prediction are based on the acoustic analogy of Lighthill [8, 9]. This is a two-step analogy that separates the sound generation induced by the fluid flow and the sound propagation in an acoustic medium. In this analogy it is shown that aerodynamic sound is a consequence of turbulence, which provides a quadrupole source distribution in an ideal gas at rest. Hence the equations of the propagation of sound in a uniform medium at rest, due to externally applied fluctuating stresses, are derived from the momentum and mass conservation equations and are given as:

\frac{\partial^2 \rho'}{\partial t^2} - a_0^2 \nabla^2 \rho' = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j}   (1)

where T_ij is the instantaneous applied stress at any point and is often called the Lighthill acoustic tensor:

T_{ij} = \rho u_i u_j + p_{ij} - a_0^2 \rho' \delta_{ij}   (2)

The far-field sound can then be calculated with the integral:

p'(\mathbf{x},t) = \frac{1}{4\pi}\,\frac{\partial^2}{\partial x_i \partial x_j}\int \frac{T_{ij}\!\left(\mathbf{y},\,t-\frac{|\mathbf{x}-\mathbf{y}|}{a_0}\right)}{|\mathbf{x}-\mathbf{y}|}\,d^3\mathbf{y}   (3)

Lighthill then used dimensional analysis to derive a law for scaling the total acoustic power of the jet with the jet speed. The acoustic power from isothermal subsonic jets is:

P = K \rho_0 u^8 a_0^{-5} d^2   (4)

where K is a proportionality constant called the acoustic power coefficient.

The acoustic power formulation given in Eq. 4 applies only to stationary sources. Having in mind that the quadrupoles convect downstream, the effect of moving sources is accounted for by a convection factor:

P = K \rho_0 u^{7.5} a_0^{-5} d^2 C^{-5}   (5)

The convection factor is given as:

C(M_c,\theta) = \left[(1 - M_c\cos\theta)^2 + \alpha^2 M_c^2\right]^{1/2}   (6)

where Mc is the convection Mach number, θ is the angle from the jet axis, and α accounts for the decay time of the eddies. Mc in a stationary ambient may be approximately related to the jet Mach number:

M_c = 0.55\,M_j   (7)

α is defined as:

\alpha^2 = \frac{(2\pi f)^2 L^2}{a_0^2} = \mathrm{const}   (8)

where f represents the characteristic frequency and L the length scale of the eddies.

Lighthill's theory is not valid for supersonic Mach numbers. In fact, by observing experimental data it can be seen that for M = 1 to 1.5 there is a u^6 dependency of the sound power level. For M up to 2.5 this dependency is of the order u^4, and for M larger than 2.5 of the order u^3.

A more suitable approach for the acoustic noise of high speed jets is given by Ffowcs Williams [10]. The Lighthill analogy does not account for interaction with surfaces; the sound is only a result of turbulence quadrupoles. This was solved by Ffowcs Williams and Hawkings [11]. The FW-H equation can be written as:

\frac{1}{a_0^2}\frac{\partial^2 p'}{\partial t^2} - \nabla^2 p' = \frac{\partial^2}{\partial x_i \partial x_j}\{T_{ij} H(f)\} - \frac{\partial}{\partial x_i}\{[p_{ij} n_j + \rho u_i (u_n - v_n)]\,\delta(f)\} + \frac{\partial}{\partial t}\{[\rho_0 v_n + \rho (u_n - v_n)]\,\delta(f)\}   (9)

where the function f = 0 denotes a mathematical surface used to embed the exterior flow problem (f > 1) in an unbounded space. H(f) is the Heaviside function and δ(f) is the Dirac function; ui, vi, un and vn are the fluid velocity and surface velocity components in the xi direction and normal to the surface f = 0, respectively. p' is the sound pressure in the far field.

This equation can be integrated analytically under the assumption of free-space flow and the absence of obstacles between the sound sources and the receivers. The solution consists of surface integrals, which contribute the monopole and dipole sources and partially the quadrupole sources, and volume integrals, which represent the quadrupole sources outside the source surface:

p'(\mathbf{x},t) = \frac{\partial}{\partial t}\int_{f=0}\left[\frac{Q_i n_i}{4\pi|\mathbf{x}-\mathbf{y}|}\right]_e dS - \frac{\partial}{\partial x_i}\int_{f=0}\left[\frac{L_{ij} n_j}{4\pi|\mathbf{x}-\mathbf{y}|}\right]_e dS + \frac{\partial^2}{\partial x_i \partial x_j}\int_{f>0}\left[\frac{T_{ij}}{4\pi|\mathbf{x}-\mathbf{y}|}\right]_e dV   (10)

where [ ]_e denotes evaluation at the emission time τe. The source terms under the integral sign are Q_i = ρ(u_i − v_i) + ρ_0 v_i and L_ij = ρ u_i (u_j − v_j) + p_ij. The third integral is evaluated in the region outside the surface, f > 0, in order to account for sources outside the FW-H surface. Since the volume integrals become small when the source surface encloses the source region, they can be dropped. It should be mentioned that the time t in Eq. (10) corresponds to the observer time, that is, the time when the observer at position x perceives the pressure fluctuations, and is different from the emission time τe.

Since there are three characteristic problem types in design procedures, considering the observer and the ambient fluid, the presence of both temporal and spatial derivatives can make the FW-H equation difficult for numerical implementation. In order to tackle this issue, over the years different formulations of the FW-H analogy better suited for numerical implementation were introduced [12].

3. NUMERICAL SETUP

In this paper the two inch SMC000 nozzle geometry was considered (Fig. 1). This geometry has been extensively tested over the years [13-15]. The nozzle geometry consists of a base with a six inch inlet (left of the arrow), which is used for mounting different nozzle cones, and a cone with a 2 inch outlet (right of the arrow). In this paper only the cone part is modeled, in order to reduce the number of cells in the grid and since the inside flow is not of interest.
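As a side note before the numerical setup, the scaling relations of eqs (4)-(7) are easy to exercise numerically. The short sketch below is illustrative only (not from the paper); the acoustic power coefficient K and the eddy-decay parameter alpha are placeholder assumptions.

# Jet acoustic power scaling and convection-factor directivity, eqs (4)-(7).
import numpy as np

a0, d, rho0 = 340.0, 0.0508, 1.225   # ambient sound speed [m/s], 2 in nozzle diameter [m], density [kg/m^3]
K, alpha    = 1.0e-4, 0.3            # acoustic power coefficient and eddy-decay parameter (assumed)

u  = 300.0                           # jet exit velocity [m/s] (assumed)
Mj = u / a0
Mc = 0.55 * Mj                       # convection Mach number, eq. (7)

P_static = K * rho0 * u ** 8 / a0 ** 5 * d ** 2      # eq. (4), stationary sources
print(f"stationary-source acoustic power estimate: {P_static:.2f} W")

def C(Mc, theta):                    # convection factor, eq. (6)
    return np.sqrt((1.0 - Mc * np.cos(theta)) ** 2 + alpha ** 2 * Mc ** 2)

for deg in (30, 60, 90, 120, 150):
    c = C(Mc, np.radians(deg))
    print(f"theta = {deg:3d} deg:  C = {c:.3f},  C**-5 (directivity weighting) = {c ** -5:.2f}")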

Figure 1. SMC000 nozzle geometry [13]

Since aeroacoustic computations are computationally very demanding, the SMC000 geometry is represented via an axisymmetric model. Axisymmetric models are not used very often for aeroacoustic calculations, because they cannot completely accurately resolve the turbulence which is the main source of noise in jets, but efforts in this direction have been made since the computational time necessary is significantly smaller than that of three-dimensional models [16]. In figure 2 the computational domain with the FW-H surface used is shown.

Figure 2. Computational domain

A grid sensitivity study for the steady flow using the k-ε turbulence viscosity model was done. Three different grid sizes were considered. The coarsest grid had 45434 cells, the medium one 172904 and the finest grid 363312 cells. A comparison of the centerline velocities on the jet axis obtained with the different grids is given in Fig. 3.

Figure 3. Centerline velocity decay obtained with the 3 different grid sizes and the k-ε turbulence model

As can be seen, the medium and fine grids both give very close results, so the medium grid, which is a structured quadrilateral grid with 627 cells in the longitudinal and 327 in the lateral direction, was chosen for the rest of the computations (Fig. 4). The dimensionless wall distance on the nozzle walls was set to y+ < 1.

Figure 4. Computational grid

The boundary condition values that were used for the computations are given in Table 1, where NPR is the nozzle pressure ratio, Mj is the jet Mach number, T0 is the ambient temperature and p0 the ambient pressure.

Table 1. Boundary condition values
NPR = 1.86, p0 = 100 000 Pa, T0 = 300 K, Mj = 0.98

The Ansys FLUENT 16.2 commercial CFD flow solver was used for the computations. The pressure-based axisymmetric transient solver with the pressure-velocity coupled scheme was used. Both the temporal and spatial discretization were of second order. The time step size needed to be set so as to be able to capture the highest frequency of 100 kHz, according to the equation Δt = 1/(n·f), where Δt is the time step size, f is the desired frequency and n is the n-th sample that is read in the acoustic module.

In order to ensure that a statistically stationary solution was achieved, the values in a few points were monitored and the transient calculations were run for at least 5000 time steps, after which the acoustic data on the FW-H surface were gathered for at least another 5000 time steps.

4. RESULTS

The velocity decay along the jet centerline obtained with the different turbulence models is given in Fig. 5.

Figure 5. Centerline velocity decay obtained with different turbulence models


It can be seen that the k-ε turbulence model gives the most accurate result in comparison with the experimental results given in [13]. The SA turbulence model best predicts the point of the start of the decay but largely overestimates the decay rate.

In figure 6 the turbulent kinetic energy (TKE) obtained with the k-ε, k-ω SST and transitional SST turbulence models is shown.

Figure 6. TKE values along jet centerline

In figures 7 and 8 the axial velocity and the turbulent kinetic energy contours are shown. The images were mirrored over the x axis in order to show the complete flow field. These values are in good agreement when compared to the PIV experimental data shown in [5].

Figure 7. Axial velocity [m/s] contours

Figure 8. TKE [m2/s2] contours

The sound pressure levels were obtained for five different observer locations on a radius of 50 nozzle diameters (100 inches) from the nozzle exit. The observer positions with regard to the nozzle exit are given in Fig. 9. These positions were set in such a manner as to replicate the microphone positions in the experimental investigations done in [13,14].

Figure 9. Observer positions

The sound pressure level (SPL, Lsp) is calculated with the equation Lsp = 10·log(PSD/pref²), where PSD is the power spectral density and pref is the reference acoustic pressure, with a value of 2×10⁻⁵ Pa.

In figure 10 a comparison between the sound pressure levels for the 90 degree observer position obtained with the different turbulence models is shown.

Figure 10. Sound pressure levels for the 90 degree observer obtained with different turbulence models

The calculated maximum sound pressure level for all turbulence models is a little larger than the experimental values, while the peak frequency and the SPL with respect to frequency in general differ a lot from the experimental values. There is a rapid decay in the sound pressure level as the frequency increases, which is completely different from the experimental results.

In figure 11 the overall sound pressure levels (OASPL) at different observer positions for the k-ε and SA turbulence models are shown in comparison with experimental data.

Figure 11. OASPL at different observer positions
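The SPL and OASPL definitions used above are simple to reproduce in post-processing. The sketch below is a minimal illustration (not the authors' post-processing code): a synthetic pressure record is converted into a power spectral density and then into SPL and OASPL; the signal parameters are placeholder assumptions.

# SPL and OASPL from a pressure time series, per the definition Lsp = 10*log10(PSD/pref^2).
import numpy as np
from scipy.signal import welch

fs = 200_000.0                         # sampling rate [Hz] (assumed)
t  = np.arange(0.0, 0.1, 1.0 / fs)     # 0.1 s record (assumed)
p  = 2.0 * np.sin(2 * np.pi * 3000 * t) + 0.3 * np.random.randn(t.size)  # synthetic pressure [Pa]

pref   = 2.0e-5                        # reference acoustic pressure [Pa]
f, psd = welch(p, fs=fs, nperseg=4096) # PSD estimate [Pa^2/Hz]
spl    = 10.0 * np.log10(psd / pref ** 2)               # SPL spectrum [dB/Hz]
oaspl  = 10.0 * np.log10(np.trapz(psd, f) / pref ** 2)  # overall SPL [dB]

print(f"peak SPL {spl.max():.1f} dB at {f[np.argmax(spl)]:.0f} Hz, OASPL {oaspl:.1f} dB")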


It is seen that the overall sound pressure level is largely overestimated, while the increase of the OASPL with the decrease of the observer angle is slightly underestimated by both the k-ε and the SA model.

5. CONCLUSION

A two-dimensional axisymmetric analysis of the aeroacoustic noise generated by the SMC000 axisymmetric two inch nozzle was done. It was shown that although the URANS computations give good results for the velocity decay and the turbulence kinetic energy, there is a large discrepancy in the sound pressure level estimation. This is probably due to the fact that the axisymmetric model is not able to fully resolve the turbulence, especially the large scale turbulence at larger downstream distances. Also, adding a buffer zone on the right exit of the computational domain may improve the SPL estimate.

Tide and Babu [5] did a 3D URANS analysis of the SMC000 on a 30 degree pie section and were still not able to accurately predict the peak frequency. It follows that a full three-dimensional model should be used for an accurate computation of the peak frequency.

ACKNOWLEDGMENT

The research work is funded by the Ministry of Science and Technological Development of the Republic of Serbia through Technological Development Project No. 35035.

References
[1] Faranosov, G.A., et al.: CABARET method on unstructured hexahedral grids for jet noise computation, Computers & Fluids, 88 (2013), pp. 165-179.
[2] Morris, P.J., Du, Y. and Kara, K.: Jet Noise Simulations for Realistic Jet Nozzle Geometries, Procedia Engineering, 6 (2010), pp. 28-37.
[3] Uzun, A. and Hussaini, M.Y.: High-Fidelity Numerical Simulations of a Round Nozzle Jet Flow, 16th AIAA/CEAS Aeroacoustics Conference, AIAA Paper 2010-4016, (2010), pp. 1-16.
[4] Kenzakowski, D.C. and Kannepalli, C.: Jet Simulation for Noise Prediction Using Advanced Turbulence Modeling, 11th AIAA/CEAS Aeroacoustics Conference, AIAA Paper 2005-3086, (2005), pp. 1-18.
[5] Tide, P.S. and Babu, V.: Aerodynamic and Acoustic Predictions from Chevron Nozzles Using URANS Simulation, 47th AIAA Aerospace Sciences Meeting, Orlando, Florida, AIAA Paper 2009-14, (2009), pp. 1-25.
[6] Engel, R.C., Silva, C.R. and Deschamps, C.J.: Application of RANS-based method to predict acoustic noise of chevron nozzles, Applied Acoustics, 79 (2014), pp. 153-163.
[7] Benderskey, L.A. and Lyubimov, D.A.: Investigation of Flow Parameters and Noise of Subsonic and Supersonic Jets Using RANS/LES High Resolution Method, 29th Congress of the International Council of the Aeronautical Sciences, St. Petersburg, Russia, September 2014.
[8] Lighthill, M.J.: On Sound Generated Aerodynamically. I. General Theory, Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, 211, 1107 (1952), pp. 564-587.
[9] Lighthill, M.J.: On Sound Generated Aerodynamically. II. Turbulence as a Source of Sound, Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, 222, 1148 (1954), pp. 1-32.
[10] Ffowcs Williams, J.E.: The Noise from Turbulence at High Speed, Philosophical Transactions of the Royal Society of London, Series A, Mathematical and Physical Sciences, 255, 1061 (1963), pp. 469-503.
[11] Ffowcs Williams, J.E. and Hawkings, D.L.: Sound Generation by Turbulence and Surfaces in Arbitrary Motion, Philosophical Transactions of the Royal Society of London, Series A, Mathematical and Physical Sciences, 264, 1151 (1969), pp. 321-342.
[12] Najafi-Yazdi, A., Bres, G.A. and Mongeau, L.: An Acoustic Analogy Formulation for Moving Sources in Uniformly Moving Media, Proceedings of the Royal Society, Series A, 467 (2011), pp. 144-165.
[13] Brown, C. and Bridges, J.: Small Hot Jet Acoustic Rig Validation, NASA/TM-2006-214234 Report, April 2006.
[14] Froening, L.V., et al.: Experimental Investigation of Turbulent Jet Flow From a Chevron Nozzle, 22nd International Congress of Mechanical Engineering COBEM 2013, Brazil, November 2013.
[15] Bridges, C.A. and Brown, C.: Parametric Testing of Chevrons on Single Flow Hot Jets, NASA/TM-2004-213107 Report, September 2004.
[16] Koch, D.L., Bridges, J. and Khavaran, A.: Flowfield Comparisons From Three Navier-Stokes Solvers for an Axisymmetric Flow Jet, NASA/TM-2002-211350 Report, February 2012.

NUMERICAL AND EXPERIMENTAL INVESTIGATION OF AERODYNAMIC CHARACTERISTICS OF SPIN STABILIZED PROJECTILE

DAMIR D. JERKOVIĆ
University of Defense in Belgrade, Military Academy, damir.jerkovic@va.mod.gov.rs
ALEKSANDAR V. KARIĆ
University of Defense in Belgrade, Military Academy, aleksandar.kari@va.mod.gov.rs
NEBOJŠA HRISTOV
University of Defense in Belgrade, Military Academy, nebojsahristov@gmail.com
SLOBODAN S. ILIĆ
University of Defense in Belgrade, Military Academy, slobodan.ilic@vs.rs
SLOBODAN SAVIĆ
University in Kragujevac, Faculty of Engineering Sciences, ssavic@kg.ac.rs

Abstract: The paper presents the numerical and experimental research and the analysis of the aerodynamic coefficients of a spin stabilized projectile. The numerical prediction of the aerodynamic coefficients is performed with the CFD (Computational Fluid Dynamics) steady RANS (Reynolds Averaged Navier-Stokes) method, four different turbulence models and three different types of mesh. Semi-empirical methods are applied to predict the values of the aerodynamic coefficients and derivatives. The experimental investigation is performed through aerodynamic wind tunnel tests and ballistic proving ground tests. The analyses of the static aerodynamic coefficients are performed for subsonic, transonic and supersonic flow for the different numerical and experimental research. The experimental proving ground test investigations are done using a 3D ballistic radar for transonic and supersonic flight Mach numbers. The comparison of the numerically predicted values of the aerodynamic characteristics with the experimental results is accomplished through a 6-DoF flight model of the 40 mm projectile. The numerical techniques and methods performed on the structured type of mesh, coupled with the SST k-ω turbulence model, generated a broad and qualitative aerodynamic description for projectile flight dynamics modeling, in accordance with the experimental flight test results acquired on the proving ground.
Keywords: aerodynamic coefficient, RANS, SST k-ω, experimental aerodynamic measurement, ballistic radar measurement.

1. INTRODUCTION

The accuracy and precision of a flight dynamic system depend on a proper model and on experimental results. The projectile, as a flight dynamic system with specific geometric and dynamic characteristics, has to preserve its initial energy during the flight through the atmosphere. The optimal aerodynamic shape of the projectile provides stable flight, decreasing drag and preserving velocity.

The classic spin stabilized projectile observed in the research, as a symmetric solid body, consists of the front part, the nose (ogive shape), the middle cylindrical part (with added rotating band) and the rear part, the boat-tail (truncated cone shape). The specific dimensions and construction of the projectile determine the specific physical effects of the air flow. The main task is to determine the influence of the air flow on the projectile with an adequate aerodynamic flow model and to verify the calculated values in relation to the test values. The significance of an accurate prediction is that the symmetric projectile, with the initial velocity as the main energy resource, flies to the target and the main influence is the air drag, which depends on the flow regimes according to the geometric parameters and boundary conditions.

The projectile is assumed to be either a body of revolution whose spin axis coincides with a principal axis of inertia, or a finned missile with three or more identical fins spaced symmetrically around the circumference of a body of revolution. In addition to the requirements of configuration and mass symmetry, the projectile is also restricted to small yaw flight along its trajectory. In conventional aircraft aerodynamics, the terms pitch or angle of attack refer to the aircraft's nose pointing above or below its flight path; the terms yaw or angle of sideslip refer to the nose pointing to the left or right of the flight path, [1].

In this paper, the numerical research of the aerodynamic coefficient of axial force is described. The numerical and semi-empirical prediction models of the aerodynamic coefficients are provided by different techniques and methods, according to the flow regime, and take into account the influence of the aerodynamic parameters and their interaction.

2. AERODYNAMIC MODEL OF THE PROJECTILE

The aerodynamic axial force opposes the forward velocity of the projectile and is the classical aerodynamic force of exterior ballistics, known as the air resistance or drag. The aerodynamic force acting on the projectile in the center of pressure is given by, [1,2],

X = q∞ S Cx   (1)

where:
q∞ = ρV²/2 is the dynamic pressure,
S = d²π/4 is the reference area of the projectile,
Cx is the aerodynamic coefficient of axial force,
ρ is the air density of the free stream and
V is the free stream velocity.

The components of the axial aerodynamic force X and the drag force D are presented in Picture 1 (X - axial aerodynamic force, D - drag aerodynamic force).

Picture 1. The Aerodynamic Force on Projectile

The axial aerodynamic force coefficient is given by (2) and depends on the Mach number and the angle of attack α, [1,2]. The force represents the main component of the total aerodynamic force (drag):

Cx(Ma, α) = Cx0(Ma) + Cxα²(Ma)·α²   (2)

The aerodynamic axial coefficient Cx(Ma) depends on the Mach number (i.e. the Reynolds number), according to the geometric parameters of the projectile, [1,2]. The axial aerodynamic coefficient, representing the aerodynamic force, thus depends on the airflow parameters (Mach number, Reynolds number), the aerodynamic velocity and the angle of attack.

3. THE EXPERIMENTAL RESEARCH

The series of wind tunnel tests of the 40 mm projectile model were performed in the T-38 wind tunnel [2,3] of the Military Technical Institute in Belgrade (VTI). The series of proving ground ballistic tests were performed in the facility of the Technical Test Center of the Serbian Armed Forces (TOC).

3.1. The wind tunnel tests

The 40 mm projectile model mounted in the T-38 test section is shown in Picture 2.

Picture 2. Projectile in the wind tunnel test section, [2]

The T-38 test facility of the Military Technical Institute in Belgrade is a blow-down type pressurized wind tunnel with a 1.5 m x 1.5 m square test section [2,3].

The wind tunnel tests of the model were performed in the Mach number range from 0,2 to 3,0. The angle of attack was in the interval from -10 degrees to +10 degrees and the roll angle was 0 degrees. The instrumentation and data acquisition system of the VTI facility were used. The data reduction is performed after each run, using the standard T38-APS software package in use with the wind-tunnel facility. The reduction is done in several stages, [2,3]:
- data acquisition system interfacing and signal normalization,
- determination of the flow parameters in the test section of the wind tunnel,
- determination of the model position (orientation) relative to the test section and airflow,
- determination of the non-dimensional aerodynamic coefficients of forces and moments.

The stagnation pressure p0 in the test section is measured by a Mensor quartz bourdon tube absolute pressure transducer, pneumatically connected to a Pitot probe in the settling chamber of the wind tunnel. The range of the used transducer was 7·10⁵ Pa. The difference between the stagnation and static pressure (pst - p0) in the test section is measured in the subsonic/transonic flow regime by a Mensor quartz bourdon tube differential pressure transducer, pneumatically connected to the p0 Pitot probe and to the orifice on the test section sidewall. In the transonic and supersonic flow regimes, absolute pressure transducers

of the same type and range are used. The range of the transducers was 1,75·10⁵ Pa. The atmospheric pressure patm is measured by a Mensor quartz bourdon tube absolute pressure transducer, pneumatically connected to the pressure port in the wind tunnel exhaust. The range of the transducers was 1,75·10⁵ Pa. The stagnation temperature T0 is measured by a custom-made RTD probe in the settling chamber of the wind tunnel. The pitching and rolling angles of the model are measured by NPL resolvers integrated in the model support mechanism. The accuracy of the pitching angle reading was 0,05 degrees and the accuracy of the rolling angle reading was 0,25 degrees, [2,3].

The aerodynamic forces and moments acting on the model are measured by an ABLE 1.00 MKXXIIIA internal six-component strain gauge balance. The nominal load range of the balance was 2800 N for normal force, 620 N for side force, 134 N for axial force, 145 Nm for the pitching moment, 26 Nm for the yawing moment and 17 Nm for the rolling moment. The accuracy was approximately 0.25% F.S. for each component. The data acquisition system consists of a Teledyne 64-channel front end controlled by a PC computer. The front-end channels for the flow parameter transducers were set with 30 Hz, fourth-order low-pass Butterworth filters and appropriate amplification. The data from all analog channels are digitalized by a 16-bit resolution A/D converter, with the overall accuracy of the acquisition system being about 0,05% to 0,1% F.S. of the channel signal range. All channels are sampled at the same 200 samples/s rate, [2,3].

The axial aerodynamic force coefficient in the body axis system is calculated from the component of the aerodynamic force X as:

Cx = X / (q∞ S)   (3)

The Mach number Ma is calculated using the isentropic relation:

Ma = \sqrt{\frac{2}{\gamma-1}\left[\left(\frac{p_0}{p_{st}}\right)^{\frac{\gamma-1}{\gamma}} - 1\right]}   (4)

3.2. The radar in-flight tests

The experimental in-flight tests were performed in the facility of the TOC, with a system of equipment for field ballistic radar measurements and the instrumentation for GPS and atmospheric measurements.

The system of equipment for field radar measurements consists of the 3D ballistic radar, the acquisition system and the support system. The radar measurement system is based on Doppler principles, with monopulse phase comparison and range measurements through integrating range, multi-frequency ranging, frequency-modulated ranging and the comparison of principles. The range measurements are done by integrating velocity, according to Multi Frequency Continuous Wave (MF-CW) and Frequency Modulated Continuous Wave (FM-CW).

The performance of the radar system antenna:
- Monopulse Phased Array Multi Frequency Doppler Radar MFDR antenna,
- Output power: 120 W,
- Antenna gain: 40 dB,
- Maximum beam: 10°x10°,
- Minimum beam: 1°x2°,
- Transmitter type: Continuous Wave, synthesized solid state,
- Frequency: X-band, adjustable between 10400 and 10550 MHz,
- Noise figure: 3 dB,
- Transmitter type: Multiple Frequency CW, solid state PLO,
- Operation mode: CW, FM-CW, MF-CW,
- RF bandwidth: 10 MHz.

The values of the aerodynamic axial coefficient are calculated on the basis of the negative acceleration, i.e. the retardation of the body in relation to the local coordinate frame bound at the initial point of the flight, according to the following equation:

Cx(Ma_{i,sr}) = \frac{4m}{\rho_{i,sr}\,S}\;\frac{V_i - V_{i+1}}{(V_i + V_{i+1})(x_{i+1} - x_i)}   (5)

where m is the mass of the flight body (projectile), S is the cross-section area of the projectile, Vi and Vi+1 are the measured flight velocities, xi and xi+1 are the horizontal distances, ρi,sr is the average value of the air density and Mai,sr is the average value of the flight Mach number.

The measurement consists of a set of proving ground trajectory flight measurements at measured initial conditions of flight and conditions of the atmosphere, Picture 3. The results of the measurement are represented as time-dependent values of the position of the flight body, in polar coordinates, with a time resolution of 10⁻³ s.

Picture 3. Trajectory Flight Measurement

The ballistic flight experimental test consists of ten measurements of the 40 mm projectile at five different elevation angles.
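Equation (5) reduces to simple arithmetic once two consecutive radar samples are available. The sketch below is illustrative only (not the authors' data reduction code) and all sample values are invented placeholders, not data from the test campaign.

# Zero-yaw axial coefficient from consecutive radar velocity/position samples, eq. (5).
import math

m = 0.96           # projectile mass [kg]                        (assumed)
d = 0.040          # reference diameter [m]
S = math.pi * d ** 2 / 4.0
rho_avg = 1.20     # average air density over the segment [kg/m^3] (assumed)

# two consecutive radar samples (velocity [m/s], horizontal distance [m]) - placeholders
V_i, x_i     = 310.0, 1000.0
V_ip1, x_ip1 = 305.5, 1050.0

Cx = (4.0 * m / (rho_avg * S)) * (V_i - V_ip1) / ((V_i + V_ip1) * (x_ip1 - x_i))
Ma_avg = 0.5 * (V_i + V_ip1) / 340.0       # average Mach number (sound speed ~340 m/s assumed)
print(f"Cx(Ma = {Ma_avg:.2f}) = {Cx:.3f}")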

4. THE NUMERICAL AND SEMI-EMPIRICAL RESEARCH OF AERODYNAMIC CHARACTERISTICS

The research model is the model of the spin stabilized projectile with the following characteristics:
- reference diameter (caliber) 40 mm,
- total length ~ 5,2 calibers,
- nose length ~ 3 calibers,
- boat tail length ~ 0,5 calibers,
- center of gravity from the nose ~ 3,3 calibers.

On the basis of the geometric and dynamic characteristics of the research model and according to the performed methods, the aerodynamic coefficients are derived. The graphs of the aerodynamic coefficients in relation to the Mach number are given in the paper. The Mach number represents the characteristics of the flow field, i.e. the velocity of the projectile in relation to the total atmosphere conditions.

The research of the aerodynamic data presented in the paper consists of two aerodynamic predictions: the semi-empirical aerodynamic predictions (ADP0), [2,3], and the numerical predictions with the Computational Fluid Dynamics (CFD) software incorporated into the Ansys Fluent package [4]. The prediction research model of the body was 0,04 m in reference diameter and 5,2 reference diameters long. The semi-empirical aerodynamic predictions (ADP0) are performed using the aerodynamic prediction technique presented in [2]. The results of the aerodynamic prediction technique ADP0 are the axial force aerodynamic coefficients at zero yaw. The values of the zero-yaw drag coefficient are assembled from the results of the components, according to the body sections and flow characteristics.

The research deals with the numerical simulation of static aerodynamic coefficients. The governing equations are given on the basis of the Reynolds Averaged Navier-Stokes equations of steady state flow. The derivatives of the aerodynamic coefficients are improved in relation to the measurements and results of the experiments. In this chapter the results of the numerical calculation of the aerodynamic coefficients are described, [4,5].

The aerodynamic numerical prediction simulations of the research model of the projectile are performed using the CFD code. The governing equations are based on the Reynolds Averaged Navier-Stokes equations (RANS), given by the equations for the conservation of mass and momentum and presented in the following forms [4,5,6,7]:

Continuity,

\frac{\partial}{\partial x_i}(\rho u_i) = 0   (6)

Momentum,

\frac{\partial}{\partial x_j}(\rho u_i u_j) = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[\mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\delta_{ij}\frac{\partial u_i}{\partial x_i}\right)\right] + \frac{\partial}{\partial x_j}\left(-\rho\overline{u_i' u_j'}\right)   (7)

where p is the mean pressure, ρ is the mean density, μ is the molecular viscosity, and ui and uj are the mean velocities. The Reynolds stresses are given by the term:

-\rho\overline{u_i' u_j'} = \mu_t\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right) - \frac{2}{3}\left(\rho k + \mu_t\frac{\partial u_i}{\partial x_i}\right)\delta_{ij}   (8)

where μt is the turbulent (eddy) viscosity and k is the turbulent kinetic energy. To correctly account for turbulence, the Reynolds stresses are modelled in order to achieve closure of (7). The method of modelling employed utilizes the Boussinesq hypothesis to relate the Reynolds stresses to the mean velocity gradients within the flow.

The numerical research is provided through four sets of numerical results, presented in the paper as CFD1, CFD2, CFD3 and CFD4, and described in Table 1.

Table 1. Numerical predictions

Mark | Domain | Mesh type       | Number of cells | Turbulence model
CFD1 | 2D     | Quad mapped     | 75·10³          | 1-equation Spalart-Allmaras
CFD2 | 2D     | Hybrid Tri-Quad | 19·10⁴          | 2-equation k-ε RNG
CFD3 | 2D     | Hybrid Tri-Quad | 21·10⁴          | 3-equation t-k-kl-ω
CFD4 | 3D     | Hexahedra       | 18·10⁵          | 2-equation SST k-ω

The set of numerical simulations CFD1, with the one-equation Spalart-Allmaras turbulence model, in a 2D numerical domain consisting of about 75 000 quadrilateral cells, is performed at three flow regimes (boundary layer of ~0,025 d size and 1,032 aspect ratio).

The set of numerical simulations CFD2, with the two-equation RNG k-ε turbulence model, in a 2D numerical domain consisting of about 19·10⁴ triangular cells, is performed at three flow regimes (boundary layer of ~0,015 d size and 1,2 aspect ratio).

The third set of numerical simulations CFD3, in a 2D numerical domain consisting of about 21·10⁴ triangular cells, is performed at three flow regimes with the three-equation t-k-kl-ω turbulence model (boundary layer of ~0,001 d size and 1,2 aspect ratio).

The fourth set of numerical simulations CFD4 is performed at three sonic regimes, in a 3D numerical

domain consisting of about 1.8 million hexahedral cells, with the two-equation SST k-ω turbulence model (boundary layer of ~0,0002 d size and 1,2 aspect ratio).

The applied turbulence models are, [4,5]:
- Spalart-Allmaras, a one-equation turbulence model (CFD1),
- RNG (Re-Normalization Group) k-ε, where additional terms improve accuracy and ε represents the turbulence dissipation rate (CFD2),
- transitional t-k-kl-ω model, where kl represents the laminar kinetic energy (CFD3),
- SST (shear stress transport) k-ω, where the turbulent viscosity is computed through the solution of two additional transport equations, for the turbulent kinetic energy k and the turbulence specific dissipation rate ω (CFD4).

The numerical discretization of the computational domain around the model was designed with the mentioned four types of mesh (Table 1). The computational domain for the 2D and 3D meshes is created with a longitudinal length of 75 to 80 reference diameters of the model and a lateral width of about 25-40 reference diameters of the model. The spatial discretization schemes of the equations were second order upwind. The computational domains are presented in Picture 4.

Picture 4. Parts of the computational domains: a) CFD1, b) CFD2, c) CFD3, d) CFD4

The outer boundaries were set to the free stream conditions at standard atmosphere, with the total temperature T = 288 K and the total pressure p = 100 000 Pa. The inner boundary of the model was modeled as a no-slip, isothermal wall boundary.

The criteria of convergence were constant values of the aerodynamic coefficients of axial force within the last 100 iterations and residuals below 10⁻⁶ for the CFD1, CFD2 and CFD3 simulations, and below 10⁻⁴ for the CFD4 simulations.

The results of the computational fluid dynamics simulations were obtained through sets of separate calculations for different Mach numbers of the three flow regimes and different values of the angle of attack in the range of 0 to 10 degrees.

4. THE ANALYSIS OF RESULTS OF THE AERODYNAMIC AXIAL COEFFICIENT

According to the performed research, the results are presented as dependencies on the flow regime, i.e. the Mach number, and also in relation to the angle of attack.

In Picture 5 the axial aerodynamic coefficient is presented in relation to the Mach number. The results of the CFD predictions are marked as CFD1 to CFD4. The experimental results are: EXPA for the aerodynamic wind tunnel tests and EXPB for the ballistic proving ground tests. The semi-empirical aerodynamic prediction results of the zero-yaw drag coefficient are marked as ADP0.

Picture 5. Axial AD coefficient Cx0 vs. Mach number (CFD1-CFD4, EXPA, EXPB, ADP0)

The CFD3 computational prediction in the 2D domain enables fast and qualitative results through all flow regimes. The CFD4 prediction shows very good results in the 3D domain and enabled the analysis of the coefficient in relation to the angle of attack. Further research based on the CFD4 prediction has shown good

agreement of other static and dynamic coefficients with the experimental results.

In Picture 6 the results of the axial aerodynamic coefficient from the numerical prediction RANS SST k-ω in the 3D numerical domain (CFD4) and from the aerodynamic wind tunnel tests (EXPA) are presented in relation to the angle of attack (AOA) for different Mach numbers, for the three groups of flow regimes.

Picture 6. Axial AD coefficient Cx0 vs. AOA (CFD4 and EXPA, Ma = 0.2 to 3.0): a) subsonic, b) transonic, c) supersonic regime

The differences between the experimental and numerical results are caused by the limitation of mounting the measuring equipment on the model base, Picture 2. The deviation of the results at zero AOA is 3,6% and increases with the value of the AOA. The deviations are the smallest in the supersonic flow regime.

The agreement of the experimental field results of the trajectory with the aerodynamic coefficient calculated by CFD4 is shown in Picture 7 (EXPB1 and EXPB2). The trajectory of the projectile (SIM) is simulated with a 6DoF model using the values of the aerodynamic coefficient obtained from CFD4.

Picture 7. Trajectory in the vertical plane (EXPB measurements and 6DoF simulation SIM)

The trajectory in-flight measurements show agreement with the simulated trajectory based on the CFD aerodynamic results. The trajectory with all its elements, such as velocity and angle, shows the same trend and value level.

5. CONCLUSION

The numerical research of the aerodynamic coefficient shows very good agreement with the experimental results. The 3D numerical research with the SST k-ω model gives qualitative results and enabled the analysis in relation to the angle of attack. Also, the numerical research in the 3D numerical domain is very convenient for further research of the aerodynamic coefficient in relation to the angular velocity (spin) and the angle of attack, separately and coupled.

References
[1] McCoy, Robert L.: Modern Exterior Ballistics, Schiffer Military History, Atglen PA, 1999.
[2] Jerković, D., Samardžić, M.: The aerodynamic characteristics determination of classic symmetric projectile, pp. 275-282, 5th International Symposium about Design in Mechanical Engineering, KOD, Novi Sad, 2008.
[3] Jerković, D., Ilić, S., Karić, A., Regodić, D.: The research of influence of the aerodynamic coefficient on the stability of the axis-symmetric projectile, 4th International Scientific Conference on Defensive Technologies, OTEH 2011, Belgrade, Serbia, 6-7 October 2011.
[4] Ansys Inc., Ansys Fluent Theory Guide, Release 14.0, Canonsburg, PA, 2011.
[5] Masatsuka, K. (2013): I do like CFD, Vol. 1, Governing Equations and Exact Solutions, CRADLE and NIA CFD Seminar Series, p. 299.
[6] DeSpirito, J., Silton, S., Weinacht, P.: Navier-Stokes Predictions of Dynamic Stability Derivatives: Evaluation of Steady-State Methods, ARL-TR-4605, US Army Research Laboratory, Maryland, 2008.
[7] Silton, S.I.: Navier-Stokes Computations for a Spinning Projectile from Subsonic to Supersonic Speeds, J. Spacecraft Rockets, 2005, 42 (2), pp. 223-231.

A HIGH SPEED TRAIN MODEL TESTING IN T-32 WIND TUNNEL BY INFRARED THERMOGRAPHY AND STANDARD METHODS

SLAVICA RISTIĆ
Institute Goša, Belgrade, s1avce@yahoo.com
SUZANA LINIĆ
Institute Goša, Belgrade, sumonja@yahoo.com
GORAN OCOKOLJIĆ
Military Technical Institute, Belgrade, ocokoljic.goran@gmail.com
BOŠKO RAŠUO
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, Serbia, brasuo@mas.bg.ac.rs
VOJKAN LUČANIN
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, Serbia, vlucanin@mas.bg.ac.rs

Abstract: Experimental techniques in wind tunnel tests have always been state-of-the-art technologies and provide high quality results. In the recent experiments on the high-speed train model, run at the semi-open low-speed wind tunnel of the VTI, infrared thermography (IRT) measurements were introduced. The intention was to observe the flow field behavior in the vicinity of the high-speed train model surface by a multidisciplinary approach. The Reynolds analogy makes it possible to correlate the fluid dynamic field to the thermal field. By using high-sensitivity thermographic systems, the temperature pattern on the surface of the test model can be analyzed. The temperature distribution on the model surface is very complex, varies in time and space, and depends on many combined effects related to the flow and model characteristics. IR thermography can indirectly perform flow visualization in the boundary layer. Two standard techniques, aerodynamic drag measurements and flow visualization with a TiO2 emulsion, were used as well. Experiments were performed for three velocities. The results show good correlation and thus verify the use of IR thermography for the non-invasive investigation of the flow in the boundary layer in precision wind tunnel testing.
Keywords: wind tunnel, high-speed train model, visualization, thermography
1. INTRODUCTION

With the intention to contribute to the challenges in the field of high-speed train aerodynamics, wind tunnel tests applying IRT on an initial model of the high-speed train were performed in the wind tunnel T-32. The task was to assess the applicability of thermography as a testing technique in wind tunnels at low-speed conditions. Furthermore, the performed tests gave insight into the temperature distribution of the simplified high-speed train in the presence of the ground. The results are used as input parameters for the computational fluid dynamics (CFD) observations in the bionic design process of the high-speed train [1].

The wind tunnel facilities and testing techniques were generally developed to the highest standards with the purpose of advanced research of aeronautical and missile models and fluid phenomena from subsonic to hypersonic speeds [2-4]. Developments in other areas perceived and used the benefits of wind tunnel testing and, over time, many non-aeronautical tests were developed, from buildings, over racecars, to high-speed trains [5].

When reviewing the development of high-speed trains [5] in the last decades, it is noticeable that the aerodynamic steps are in focus [5-8]. The aerodynamic characteristics of high-speed trains were investigated through two approaches, wind-tunnel testing and numerical methods [9]. Although the tendency is to cut the expenditure of the projects, wind-tunnel tests have remained irreplaceable. Nowadays, following the trends is a necessity; furthermore, in the field of railway vehicles, trends in aerodynamics mean lowering of the power consumption and cheaper transportation. On the other hand, the trend of applying state-of-the-art measurement technologies requires multidisciplinary research. At the VTI, for decades, building on significant experience in the field of non-aeronautical wind-tunnel research [10], contemporary measurement techniques, such as the IRT presented here, have been applied. As an overlapping technique, the oil emulsion visualization was used for the justification of the CFD observations, as a necessary step in aerodynamic practice [11].

The IRT has become a very helpful non-invasive temperature measuring method, for instance in the cases of: the object surface temperature and insulation checks of industrial facilities [12]; the energy efficiency and building


between the model surface and the surrounding air. A BL


over the flat plate is consisted of three zones: laminar
from the leading edge of the plate, further, transition and
afterwards turbulent regime. In accordance to the
Reynolds analogy [16-18], the velocity boundary layer
(VBL) and corresponding thermal boundary layer (TBL)
have the similarly defined BL thicknesses. Both, for the
VBL and TBL the BL edges thickness are defined as
values near the free stream values (0.99V and 0.99T).
Reviewing the flow, the VBL exists right from the leading
edge, while the TBL a bit later, thus the flow over the
small length from the leading edge has the equal value to
the free stream temperature. A development of the VBL
and the TBL results, for both, in thickening (Picture 1a.),
from laminar VBL (parabolic type of the edge) to a
turbulent VBL (Picture 1b.), but the characters of velocity
and the temperature profiles differ (Picture 1c.).
Depending of the flow conditions, at a transition point,
flow starts transformation, from the laminar to the
turbulent flow. Inside the turbulent flow, a buffer layer
and viscous sub-layer exist and grow along the turbulent
layer. However, temperature values, as well as the
thickness of the TBL differ to the flow regimes thus the
transition point might be detected by temperaturedifference measurements.

insulation of the museums, heritage buildings [13]; also a


material loading behavior as well a defect or crack
recognition. Nowadays, for wind tunnel-testing, the IRT
is of the great support for insight of the fluid flow
convective heat transfer [24] and boundary layer
transition [14,15]. The main advantage of the IRT in the
wind-tunnel testing are the test-budget savings because
the IRT gives very fast, transient and associative data
upon which advanced, and reciprocally high time/cost,
experiments of the boundary layer are employed.

2. HEAT TRANSFER IN THE BOUNDARY


LAYER
The air wetting the smooth flat plate model surface in twodimensional case, for the simplicity, in a general case of an
incompressible flow is considered. The energy equation
leads to an energy balance that is subjected to the internal
energy e ( e = cvT ), the conduction of heat (caused by
collision of the molecules of the fluid), the convection of
heat by a flow of fluid (caused by the flow streaming over
the surface in contact) and the heat sourced by a friction [1719]. The convection in flow is recognized in two types as:
natural, when the flow was powered by the density
difference caused by the heat transfer between the fluid and
the model, and forced convection that is powered by the
mechanical source of fluid motion.
Conduction is generally described by Fourier's law, where the conductive heat flux between the model and the fluid in contact, qconduction, is given by (1):

qconduction = -k (∂T/∂n)|n=0    (1)

where k denotes the thermal conductivity and (∂T/∂n)n=0 is the magnitude of the temperature gradient in the direction normal to the model surface [17,18].
Forced heat convection involves both the air motion and heat conduction, and is described by Newton's law of cooling [17], which defines the convective heat transfer, qconvection, as in (2):

qconvection = h (Tsurface - T∞)  [W/m²]    (2)

Picture 1. (a) Details of the VBL and the TBL, (b) laminar and turbulent velocity profiles with the corresponding thicknesses, δl and δt, respectively, and (c) different temperature profiles in the TBL and the corresponding TBL thickness, δT

where h is the heat transfer coefficient and the brackets define the difference between the surface temperature, Tsurface, and the stagnation temperature of the flow, T∞. The convective heat transfer coefficient, h, depends on the air properties, the flow characteristics and the quality of the model surface, especially on its roughness.
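As a rough numerical illustration of eq. (2), the following short Python sketch evaluates the convective heat flux for assumed values of h and of the two temperatures (the numbers are illustrative, not data from these tests):

# Minimal sketch of eq. (2): convective heat flux from the surface.
h = 100.0            # assumed convective heat transfer coefficient [W/(m^2 K)]
T_surface = 295.0    # assumed surface temperature [K]
T_inf = 293.0        # assumed free-stream temperature [K]
q_convection = h * (T_surface - T_inf)   # eq. (2), [W/m^2]
print(f"q_convection = {q_convection:.1f} W/m^2")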

Therefore, the resulting temperature of the model surface is a combination of the different heat transfer types. The influence of heat transfer by radiation from other bodies near the model is not significant when considering the flow and temperature behavior.

Furthermore, the analogy between momentum and heat transfer, described by the Reynolds analogy, is valid for air under the assumption of a unit Prandtl number [16-18]:

h = (1/2) cf V (kf/ν),   cf,lam ≈ 0.664/√Re,   cf,turb ≈ 0.027/Re^(1/7)    (3)

where kf is the thermal conductivity of the fluid, ν is the kinematic viscosity and Re is the Reynolds number.
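As an illustration of eq. (3), a minimal Python sketch is given below; the free-stream velocity, air properties and distance from the leading edge are assumed values, not measured data from the T-32 tests:

# Sketch of eq. (3): heat transfer coefficient from the Reynolds analogy (Pr = 1 assumed).
import math
k_f = 0.026     # thermal conductivity of air [W/(m K)], assumed
nu  = 1.5e-5    # kinematic viscosity of air [m^2/s], assumed
V   = 50.0      # free-stream velocity [m/s], assumed
x   = 0.5       # distance from the leading edge [m], assumed
Re = V * x / nu                      # local Reynolds number
cf_lam  = 0.664 / math.sqrt(Re)      # laminar flat-plate friction coefficient
cf_turb = 0.027 / Re**(1.0 / 7.0)    # turbulent flat-plate friction coefficient
h_lam  = 0.5 * cf_lam  * V * k_f / nu   # laminar estimate [W/(m^2 K)]
h_turb = 0.5 * cf_turb * V * k_f / nu   # turbulent estimate [W/(m^2 K)]
print(f"Re = {Re:.3e}, h_lam = {h_lam:.1f} W/m^2K, h_turb = {h_turb:.1f} W/m^2K")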
Boundary layer (BL) flow conditions affect the heat flux between the model surface and the surrounding air. A BL over a flat plate consists of three zones: laminar, starting from the leading edge of the plate, followed by transition and then the turbulent regime. In accordance with the Reynolds analogy [16-18], the velocity boundary layer (VBL) and the corresponding thermal boundary layer (TBL) have similarly defined thicknesses: for both the VBL and the TBL, the BL edge is defined where the local values approach the free-stream values (0.99V∞ and 0.99T∞). Following the flow, the VBL exists right from the leading edge, while the TBL starts slightly later, so over a small length from the leading edge the flow temperature is equal to the free-stream temperature. The development of the VBL and the TBL results, for both, in thickening (Picture 1a), from a laminar VBL (parabolic type of edge) to a turbulent VBL (Picture 1b), but the shapes of the velocity and temperature profiles differ (Picture 1c). Depending on the flow conditions, at a transition point the flow starts to transform from laminar to turbulent. Inside the turbulent flow, a buffer layer and a viscous sub-layer exist and grow along the turbulent layer. However, the temperature values, as well as the thickness of the TBL, differ between the flow regimes, so the transition point can be detected by temperature-difference measurements.
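To give a feeling for the orders of magnitude of the thickening described above, a short Python sketch of the standard laminar (Blasius) and turbulent flat-plate thickness estimates follows; the velocity, viscosity and positions are assumed values, not test data:

# Rough sketch of VBL thickness growth over a flat plate.
V, nu = 50.0, 1.5e-5          # assumed free-stream velocity [m/s] and kinematic viscosity [m^2/s]
for x in (0.1, 0.5, 1.0):     # distance from the leading edge [m]
    Re_x = V * x / nu
    delta_lam  = 5.0 * x / Re_x**0.5      # laminar (Blasius) estimate
    delta_turb = 0.37 * x / Re_x**0.2     # turbulent power-law estimate
    print(f"x = {x:.1f} m: delta_lam ~ {delta_lam*1000:.2f} mm, delta_turb ~ {delta_turb*1000:.1f} mm")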
3. IR THERMOGRAPHY

IRT is based on the measurement of the thermal energy radiated from the model surface. The measurement of the temperature distribution over the model wetted surface by IRT differs from other measuring techniques. The
radiation of nearby bodies and heat sources has to be taken into account as an influence on the measurements. The intensities from all three of the influences described below produce the total radiation intensity defined in (5) [13].

All bodies at a temperature above absolute zero radiate thermally and continually in time. Heat transfer by radiation is specific in comparison with the other two types, conduction and convection: it is caused by electromagnetic emission and absorption and occurs even in vacuum. Furthermore, thermal radiation propagates at the speed of light, with energy proportional to the fourth power of the surface absolute temperature [13]. The total radiated energy, obtained from Planck's law, is described by the Stefan-Boltzmann law:

E(T) = σT⁴  [W/m²]    (4)

Etotal = ετEobj + (1-ε)τEenvir + (1-τ)Eatm    (5)

Picture 2 represents the scheme of the IRT measurement. The model, surrounded by the environment, emits radiation in all directions. The intensity of the radiation measured by the IRT camera is described by eq. (5). For valuable results, four main parameters matter: the thermal sensitivity of the IRT camera (signal exceeding the noise), the scanning rate (frame rate) for following the flow dynamics, the imaging resolution (for high-quality data recording), and the intensity resolution [16].

where σ = 5.6697×10⁻⁸ [W m⁻² K⁻⁴] denotes the Stefan-Boltzmann constant and T [K] is the surface absolute temperature. The total radiated energy is lower for a real body than for the black body [17].
The emitted radiation depends on the body temperature. The radiation laws are defined for the thermally black body [13], an ideal body, but real bodies, i.e. their surfaces, have different characteristics. The radiation of real surfaces is affected by the surface material, the surface finishing quality, the radiation wavelength, the angle of emission or absorption, the spectral distribution of the incoming radiation and the transparency. In contrast to the black body, real bodies behave according to their emittance and the characteristics of IR radiation, as gray (ε = const.) or non-gray, spectral, bodies (ε = ε(λ)). The spectral emissivity coefficient, ελ = E(λ)/Eblack(λ), represents the body's ability to radiate and is defined as the ratio of the emissive powers of the real body, E(λ), and the black body, Eblack(λ). As the emissive power of a real body is less than that of the black body, the emittance of a real body is in the range from 0 to 1, while for the black body it is unity.
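A short numerical illustration of eq. (4) for a black and a gray surface follows; the emissivity value is an assumption for a matte coating, not a measured property of the model:

# Sketch of the Stefan-Boltzmann law, eq. (4), for a black and a gray surface.
sigma = 5.6697e-8      # Stefan-Boltzmann constant [W m^-2 K^-4]
T = 300.0              # assumed surface temperature [K]
eps = 0.95             # assumed emissivity of a matte black coating
E_black = sigma * T**4         # black-body emissive power [W/m^2]
E_real  = eps * E_black        # gray-body emissive power, lower than E_black
print(f"E_black = {E_black:.1f} W/m^2, E_real = {E_real:.1f} W/m^2")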

Picture 2. Illustration of the method of temperature


measurement by the IR thermography

The IR detector has a significant role in defining the inputs, since it is the source of the signals and is required to have as low a noise level as possible. Besides the IR detector, the signal quality is also influenced by the weakness of the temperature differences associated with the flow transition phenomenon. The frame rate of the camera gives the possibility to follow the flow dynamics. Unfortunately, besides the influence of the flow dynamics on the thermal response, the thermal characteristics of the model, its heat capacity and conductivity, also influence the overall dynamics of the phenomenon. Knowing all these facts, special care has to be taken with the temperature measurements by IRT, with the model manufacturing intended for thermal imaging, and with the adjustment of the environmental conditions [15,16].

IR thermography is a temperature measuring method that involves the use of IRT cameras (Picture 3) and is commonly called thermal imaging. The IRT camera collects the radiation from three different sources: (a) from the body surface of interest (depending on the body temperature); (b) from the environment in which the body is placed for imaging (this radiation is actually reflected from the body of interest but originates from the nearby bodies) and (c) from the atmosphere. The atmosphere, in terms of IRT, represents the air environment in which the body is immersed and in which the IRT camera is placed (both being sources and absorbers of radiation). The radiation coming from the body surface is defined as ετEobj, where ε is the emittance and τ is the transmissivity of the atmosphere, and Eobj is the radiation intensity of a black body at the object-model temperature, Tobj. The radiation intensity emitted by the nearby bodies is described as (1-ε)τEenvir, where (1-ε) is the surface reflectance and Eenvir is the black-body radiation intensity of the environment at the temperature of the surroundings. Further on, (1-τ)Eatm is the radiation intensity sourced by the atmosphere, where (1-τ) is the atmosphere emissivity.
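For illustration, eq. (5) can be rearranged to isolate the object term from the intensity registered by the camera; the following Python sketch does this for assumed values of the emittance, transmissivity and the environmental and atmospheric intensities (all of them illustrative):

# Sketch: isolating the object contribution from eq. (5).
eps, tau = 0.95, 0.98          # assumed surface emittance and atmospheric transmissivity
E_total = 450.0                # assumed total intensity registered by the camera [W/m^2]
E_envir = 420.0                # assumed black-body intensity of the surroundings [W/m^2]
E_atm   = 410.0                # assumed black-body intensity of the atmosphere [W/m^2]
# eq. (5): E_total = eps*tau*E_obj + (1-eps)*tau*E_envir + (1-tau)*E_atm
E_obj = (E_total - (1 - eps) * tau * E_envir - (1 - tau) * E_atm) / (eps * tau)
print(f"E_obj = {E_obj:.1f} W/m^2")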

4. DESCRIPTION OF THE EXPERIMENT


Wind tunnel facility. The thermal imaging tests were done in the closed-circuit, semi-open low-speed wind tunnel (WT) T-32 at the VTI, Belgrade. The facility was originally built for testing aeronautical models, but after test section adaptations it was prepared for this non-aeronautical testing. Picture 3 shows the facility from the side view. The flow is continuous, forced by a manually controlled DC-motor-driven propeller, in a range up to 70 m/s. The test section, TS, is open, elliptical in cross-section (1.8 m × 1.2 m), with a length of 2 m. The internal TS surfaces are coated with matte paint. The estimated turbulence factor of the T-32 WT is TF = 1.14.
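As an illustration of how a turbulence factor is commonly used, the following Python sketch estimates an effective Reynolds number, Re_eff = TF·Re, for an assumed test condition; the velocity and viscosity below are assumptions, only the TF and model length come from the text:

# Sketch: effective Reynolds number from the turbulence factor.
TF = 1.14          # estimated turbulence factor of the T-32 WT (from the text)
V  = 55.0          # test velocity [m/s], assumed
L  = 1.0           # SHST model length [m] (from the text)
nu = 1.5e-5        # kinematic viscosity of air [m^2/s], assumed
Re = V * L / nu
print(f"Re = {Re:.2e}, effective Re = {TF * Re:.2e}")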


The testing setup in the cross view is presented in Picture 4. The model is positioned at the center of the test section, TS, by struts (front and rear). The ground board was made of wood, coated with a simple glossy lacquer. In the longitudinal direction, metal profiles were joined to it to prevent ground deformations under the flow influence.
The oil emulsion flow visualization was applied as the method most appropriate for these tests. The recipe of the emulsion contained 30 ml of paraffin oil, 5 ml of oleic acid and 10 g of TiO2. The upper and side model surfaces were covered with spots of the emulsion.

The experimental procedure was divided into separate temperature measurement and visualization runs. For each of the test types, the task was to set the flow conditions in the shortest time, while respecting the capabilities of the WT power unit. A significant note for the testing procedure concerns the test run time and the test break duration. For the visualization tests, standard test runs were performed, restarting just after the model test preparation, but the thermal imaging tests needed breaks for the model to cool down to the environmental temperature. Each thermal imaging test run started before the WT run by capturing a null image, for checking the actual thermal condition of the model and for the analysis of the results.

Picture 3. Low-speed semi-opened wind tunnel


SHST model. The SHST model was designed for force measurements and flow visualization. It was made from a piece of wood, 1 m long, with the geometry presented in Picture 5. The SHST finishing coating consisted of one base coat over the wooden surface and two layers of black matte lacquer. The SHST surfaces were polished in the finishing process to a uniform roughness.

5. RESULTS AND DISCUSSION


The test results were obtained for the following cases: (A) WT stepping acceleration, with the model initially at room temperature (thermal images were recorded after the flow conditions stabilized at each of the selected velocity steps); (B) WT continual deceleration (the frame-velocity data were marked while measuring); and (C) WT continual acceleration (the frame-velocity data at selected velocities were marked while measuring).
The diagrams in Picture 6 correspond to the measuring lines L1-L4 from Picture 5 (L4 was placed, in that perspective, along the SHST roof). The first was along the SHST side, while another seven were in an array along the lines towards the trailing edge. Picture 6 shows the temperatures along the measuring line L1 (spots C1-C9) during the stepping acceleration, with the SHST initially at room temperature. The warming of the model is linked to the warming of the SHST surface, produced by the flow-surface thermal-momentum interaction (friction, convection and conduction) inside the boundary layer. Furthermore, the uniformity of the temperatures points to the existence of a turbulent BL over the SHST sidewalls.

Picture 4. The SHST model and the IRT equipment set


for tests (L=2m)

Picture 5. The SHST model geometry


IRT camera. For the temperature measurements, the FLIR E40 SC camera was used. The main IRT camera characteristics are: measuring range from -20 °C to 650 °C (with two selectable ranges, -20 °C to 120 °C and -20 °C to 650 °C), a resolution of 160 × 120 pixels, thermal sensitivity < 0.07 °C, standard optics of 25° × 19°, measuring error of ±2% or ±2 °C, an uncooled bolometer detector, a frame rate of 30 Hz, manual focusing and 1-2× zoom selection. The thermal images, thermograms, were read with the specialized software FLIR Researcher and FLIR Tools.

Picture 6. Temperatures measured along L1, at spots


C1-C9, V=0-55 m/s (from the thermograms).

The dependences of temperature on velocity, again over L1 and spots C1-C9, show an intensive temperature increase of the SHST for V > 40 m/s (Picture 7).




Picture 7. Temperatures along L1, C1-C9, stepping acceleration, V = 0-55 m/s (from imaging)

The tendencies of the surface temperature behavior measured along L1 during the WT continual deceleration and acceleration follow the dynamics of the velocity changes (Picture 8). The continual deceleration was manually controlled by a gradual decrease of velocity; the surface temperature followed it, with the note that the temperature at the end of this process was higher than the room temperature. After a break of 10 min between runs, the WT was started again, now with continual acceleration, with the dynamics chosen so as to intentionally hold the velocity at exact values. After each velocity was set, a thermogram was recorded and the acceleration continued.


WT continual deceleration, SHST just after the test run, from video sequences.
Pictures 9a-c represent the comparative view of the flow visualization and the thermograms for three velocities: 40 m/s, 50 m/s and 55 m/s. The flow visualization confirmed the existence of a turbulent BL, while the thermograms confirmed the heating of the SHST surfaces during the acceleration.

Picture 9. Comparative view of the flow visualization
and thermograms at (a) 40 m/s, (b) 50 m/s and (c) 55 m/s

6. CONCLUSIONS
This paper presents the results of experimental flow visualization and of temperature measurements on the model surface using IRT, applied to a high-speed train model in a low-speed wind tunnel.
The obtained IRT results correlate well with the oil emulsion flow visualization. The measurements made during this experiment could not be used for a precise determination of the boundary layer transition, because the model is small, has a smooth surface finish and was tested in low-speed flow. However, the IRT results do show the temperature changes during the wind tunnel runs. The estimated temperature difference between laminar and turbulent flow is less than 1 °C, which is less than the temperature rise due to air-surface friction.

Picture 8. Temperatures along L1, C1-C9, continual deceleration in the range V = 55-0 m/s and acceleration in the range V = 0-55 m/s (from video capturing)

2D Flow Fields for the Bionic High-Speed Train


Concept Designs Inspired with Aquatic and Flying
Animals, Proceedings of the 6th International
Scientific Conference on Defensive Technologies,
OTEH 2014 (2014) Military Technical Institute,
Belgrade, Serbia, 44-49,
[10] Puhari,M., Luanin,V., Lini,S., Mati,D.: Research
Some Aerodynamic Phenomenon of High Speed
Trains in Low Speed Wind Tunnel, Proceedings of
the 3rd International Scientific and Professional
Conference CORRIDOR 10 - A sustainable way of
integrations, October 25th (2012) Belgrade, Serbia,
[11] Lini,S., Risti,S., Stefanovi,Z., Kozi,M., Ocokolji,G.: Experimental and Numerical Study of Super-Critical Flow Around the Rough Sphere, Scientific Technical Review, 65 (2) (2015) 11-19.
[12] Kozi,M., Risti,S., Katavi,B., Lini,S., Risti,M.:
Determination of the Temperature Distribution on
the Walls of Ventilation Mill by Numerical
Simulations of Multiphase Flow and Thermography,
Proceedings of the 5th International Congress of
Serbian Society of Mechanics, June 15-17. (2015)
Arandjelovac, Serbia,
[13] Risti,S., Poli-Radovanovi,S.: Termografija u
zatiti kulturne batine, Institut GOA, Jan 15, 2013.
[14] Carlomagno,G.M., Cardone,G.: Infrared thermography for convective heat transfer measurements,
Exp Fluids, 49 (2010) 11871218.
[15] Crawford,B.K.,
Duncan,Jr.G.T.,
West,D.E.,
Saric,W.S.: Laminar-Turbulent Boundary Layer
Transition Imaging Using IR Thermography, Optics
and Photonics Journal, 3 (2013), 233-239,
dx.doi.org/10.4236/opj.2013.33038
[16] Simon,B., Filius,A., Tropea,C., Grundmann,C.: IRThermography for Dynamic Detection of LaminarTurbulent Transition, 18th International Symposium
on the Application of Laser and Imaging Techniques
to Fluid Mechanics, July 4-7, (2016) Lisbon,
Portugal
[17] Lienhard,IV,J., Lienhard,V,J.: A Heat Transfer Textbook, Phlogiston Press, Cambridge Massachusetts,
2008.
[18] Schlichting,H.: Boundary-Layer Theory, 7th edn.,
McGraw-Hill, New York, 1979. ISBN 0-07-055334-3.
[19] Anderson,J.D.Jr.: Fundamentals of Aerodynamics,
McGraw-Hill Series in Aeronautical and Aerospace
Engineering, 2nd ed., 1991,

Further development of the model design and of the testing methods is recommended. The application of IRT in wind tunnel experiments with a highly sensitive IRT camera can produce more precise thermograms, through higher resolution and thermal sensitivity, and is therefore recommended for BL research in low-speed wind tunnels.

ACKNOWLEDGMENTS
The authors are grateful to the Ministry of Education, Science and Technological Development, Republic of Serbia, for supporting this work as part of the research within the funded projects TR-35045 and TR-34028.

REFERENCES
[1] Rauo,B.: Bionics in Design, Faculty of Mechanical
Engineering University of Belgrade, (Belgrade)
eBook on CD (in Serbian), 2014.
[2] Pope,A., Harper,J.: Low-Speed Wind Tunnel Testing,
Wiley, Jan 1, 1966.
[3] Mrkalj,N., umonja,S.: Ispitivanje modela sa
prostrujavanjem u aerotunelu T-32, Scientific
Technical Review, XLVI (4-5) (1996) 51-59.
[4] Ocokolji,G., Damljanovi,D., Rauo,B., Isakovi,J.:
Testing of a Standard Model in the VTIs Largesubsonic Wind-tunnel Facility to Establish Users
Confidence, FME Transactions 42 (2014) 212-218.
[5] Raghunathana,R.S.,
Kimb,H.D.,
Setoguchi,T.:
Aerodynamics of high-speed railway train, Progress
in Aerospace Sciences, 38 (2002) 469514.
[6] Baker,C.: The Flow Around High-Speed Trains,
BBAA VI International Colloquium on: Bluff Bodies
Aerodynamics & Applications, July 20-24 (2008),
Milano, Italy,
[7] Baker,C.J., Brockie,N.J.: Wind Tunnel Tests to
Obtain Train Aerodynamic Drag Coefficients:
Reynolds Number and Ground Simulation Effects,
Journal of Wind Engineering and Industrial
Aerodynamics, 38 (1991) 23-28.
[8] Cheli,F., Rocchi,D., Schito,P., Tomasini,G.: Steady
and moving high -speed train crosswind simulations.
Comparison with wind-tunnel tests, Proceedings of
the 9th World Congress in Railway Research, May
22-26 (2011) WCRR Lille
[9] Linic,S.,
Rasuo,B.,
Kozic,M.,
Lucanin,V.,
Puhari,M.: Comparison of Numerically Obtained


AERODYNAMICS OF THE HIGH SPEED TRAIN BIO-INSPIRED BY A


KINGFISHER
SUZANA LINI
Institute Gosa, Belgrade, sumonja@yahoo.com
BOKO RAUO
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, Serbia, brasuo@mas.bg.ac.rs
MIRKO KOZI
Military Technical Institute, Belgrade, Serbia, mkozic@open.telekom.rs
VOJKAN LUANIN
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, Serbia, vlucanin@mas.bg.ac.rs
ALEKSANDAR BENGIN
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, Serbia, abengin@mas.bg.ac.rs

Abstract: With the intention to contribute to the aerodynamic optimization of transport vehicles, a high-speed train was designed by biomimicry of a kingfisher. The presented bionic concept design was defined after a series of hydrodynamic and numerical experiments on a bionic model of the kingfisher. The aerodynamic characteristics of the high-speed train were observed in three configurations, under various railing conditions, as follows: in free flight (up to critical local conditions), in the presence of the ground, and passing through a tunnel. The obtained results present the main aerodynamic characteristics of the bionic high-speed train and are further analyzed from the point of view of the design source. The relevant results were compared with concept designs bio-inspired by other animals, showing the best performance for the kingfisher biomimicry up to velocities of 400 km/h.
Keywords: aerodynamics, bionic, CFD, optimization.

1. INTRODUCTION
Although biomimicry of the kingfisher is very frequently pointed out as a model for the aerodynamic shaping of the high-speed train nose [1,2,3,4,5], no further details are reported in the contemporary literature in the field of biomimicry and bionics [6]. Motivated to apply biomimicry to industrial design, in this work on the bionic high-speed train, BHST [7,8], we found it necessary to develop knowledge [9] and collect data [10] about the phenomenon of the kingfisher plunge-diving [11] as a design base, and to justify the artificial case by hydrodynamic tests [12,13,14]. The same parametric model used for manufacturing the bionic kingfisher for the free-fall water entry was used for the design of the numerical BHST in real scale, after adjustment of its contour. Furthermore, the BHST was forced to run under different hypothetical conditions. To understand the basic aerodynamic characteristics of the BHST, with the drag and the pressure distribution in focus [15], the BHST was first released into free flight in the subsonic to transonic velocity range [16]. Afterwards, the BHST was grounded to run on the open rail and, at last, to pass through an infinite tunnel.
The flow conditions were set to the ranges of interest needed for further design adjustments. One may note that this work is based on the parameters of the flow velocity and the model geometry in 2-D space, which is hypothetical but represents the characteristics of the initial self-similar profile in the longitudinal direction under the advanced conditions. All the other self-similar nose/tail cone cross-sections on the future 3-D BHST were assumed to be much sharper and of smaller area than the longitudinal one. The presented cases were wall-bounded, introducing flows more complex than in reality. In the first place, the idea was to apply experience and knowledge from aircraft aerodynamics and design in various flight regimes, not used up to now in the design process of high-speed trains. Combining the selected geometries in the two orthogonal, transverse and longitudinal, directions, created upon their flow responses, is expected to give in further research a final bionic design ready for wind tunnel testing and numerical justification [18,19], with the intention to predict the flow by a new method before real-scale rail runs [20].

In this work, we required from the BHST answers about the geometry quality based upon the flow reaction [7,8].
The available sources are commonly based on transient flow characteristics over 3-D models, for observing the train passing through a tunnel; only a small number cover 2-D or axisymmetric cases, such as the observation of the tube train [21].


2. NUMERICAL MODEL OF THE BHST


The numerical model of the 2-D BHST was created by re-engineering a kingfisher longitudinal profile, contouring layered images from nature. The software PRO/Engineer WF4 was used for the creation of the 2-D body contour. The contour, adopted as the longitudinal cross-section of maximal area, was translated into a formatted text file and imported into the CFD space for mesh generation with the ANSYS Fluent package. In this process, one data source was a selected profile photo of the kingfisher, a Natural History Museum exhibit (Picture 1), while the other two were used for positioning the model relative to the water, reconstructing the body pose of the kingfisher during water entry (Picture 2a) [7,8,9,11]. Picture 2b shows the hydrodynamic model with its support items marked. The wooden model simulated the kingfisher body in shape and mass [22].

The test model simulated the shape and the dynamic characteristics of the kingfisher's rigid body. The center of gravity was estimated with the PRO/Engineer tools and those data were applied to the test model. Picture 2b shows the scheme of the model with its equipment: 1 - the body of the bionic model made from layers of balsa, 2 - fixation/directioning strings, 3 - releasing string, 4 - lead weight inside the model, at the estimated center of gravity, providing the rigid-body dynamic similarity, 5 - fixation of the strings.

3. HYDRODYNAMIC EXPERIMENT
The hydrodynamic tests were based on methods used for testing marine structures [12,13,14]. The test model was set to accomplish a free-fall water entry with zero deadrise angle, vertically, guided by fixed sliding strings. From a height of 150 mm above the water surface (measured from the beak tip), the test model was released to slide along the strings and to enter the water in a tank filled with clear and calm water at an average temperature of 26 °C. During the water entry the videos were captured by a Samsung ES95 camera (30 fps), for about 10 repeats. A representative sequence of the bionic kingfisher water entry is shown in Picture 3.

Picture 1. The female kingfisher (museum exhibit from


the Natural History Museum, prepared in year 1939.)

Picture 3. Water entry of the bionic kingfisher model


(a)

(b)

The average impact velocity estimated from the video sequences was 0.6 m/s, and the result showed good correlation with the images from nature [11]. This experiment justified the statements from the literature [2,3,4] about the non-rippling behavior of the water surface during impact. Furthermore, the biological design of the kingfisher beak showed a non-rippling water surface over the whole beak length at a velocity of about 0.6 m/s, and by that means the geometry was classified as acceptable for application to the bionic high-speed train, BHST.
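For illustration, a frame-by-frame velocity estimate reduces to dividing the displacement of the beak tip between consecutive frames by the frame period; a minimal Python sketch with an assumed displacement (the 30 fps value comes from the text, the displacement does not):

# Sketch: velocity estimate from consecutive video frames.
frame_rate = 30.0          # Samsung ES95 frame rate [fps] (from the text)
dt = 1.0 / frame_rate      # time between frames [s]
displacement = 0.020       # assumed beak-tip travel between two frames [m]
v_impact = displacement / dt
print(f"Estimated impact velocity: {v_impact:.2f} m/s")   # 0.02 m per frame -> 0.6 m/s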

Picture 2. Models of the kingfisher: (a) a re-engineered 3-D body and (b) a 2-D bionic model for the hydrodynamic tests (1 - wooden bionic model, 2 - directioning strings, 3 - releasing string, 4 - lead weight, 5 - fixation of the strings)
The layered images were combined to form the plunge-diving pose of the kingfisher. The selection of the body size, which depends on genetic and environmental conditions, was based on the maximal distance between the beak tip and the tail end. The reference line followed the overlapping line of the beak parts. The adopted model length was 190 mm and the weight 180 g. One may note that the model was over-dimensioned compared with the domestic specimens (about 120 mm), but such a selection was made to reach good manufacturing quality and good non-macro video captures of the phenomenon during the hydrodynamic tests.

The longitudinal contour of the BHST contained two facing, symmetrical power cars, with a gross length of 50 m and a height of 3 m, with the nose and tail geometries scaled

from the bionic numerical kingfisher. The nose and tail contours were blended from their free ends into the BHST train body shape by tangent radii, over both the upper and lower sides, resulting in a nose elongation of 4.471.

4. CFD EXPERIMENT
The CFD experiment was made by use of the ANSYS Workbench 12 [23]. The geometry of the computational space was made with the ANSYS geometry software, with the BHST contour imported in the form of an organized text data file.

Picture 5. The mesh around the BHST in the presence of


the ground and a detail with inflation layer cut

The CFD observations of the BHST were made for three configurations: in free flight, on the open rail and passing through an infinite tunnel, under conditions at three nominal Mach numbers, M = 0.2, 0.3 and 0.4.
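For orientation, the nominal Mach numbers correspond to the following free-stream velocities if standard sea-level conditions are assumed; the speed of sound is not stated in the paper, so the values below are only indicative:

# Sketch: converting nominal Mach numbers to velocities at assumed sea-level conditions.
import math
gamma, R, T = 1.4, 287.05, 288.15     # assumed air properties and temperature [K]
a = math.sqrt(gamma * R * T)          # speed of sound [m/s], ~340 m/s
for M in (0.2, 0.3, 0.4):
    print(f"M = {M:.1f} -> V = {M * a:6.1f} m/s ({M * a * 3.6:5.0f} km/h)")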

The BHST bounded by the tunnel required a much denser mesh to meet the mesh quality requirements. Created inside a domain with the same longitudinal dimensions as the others and a height of 8 m, the mesh finally consisted of 1537287 elements after adaption (Picture 6). Despite the fixed tunnel length, the tunnel is treated in the analysis as infinite, because no changes of the geometry conditions occur under the set of steady flow conditions. The mesh quality is described by a maximal squish of 0.57, a skewness of 0.78 and an aspect ratio of 18.

4.1. Geometries and meshes


An elliptical domain surrounded the BHST model in free flight. As shown in Picture 4, the smaller axis is the symmetry axis, placed horizontally. The inlet domain boundary is 150 m from the BHST nose tip, while the outlet boundary is 300 m behind the BHST nose. The vertical half-axis height is 290 m. For better mesh quality, the domain was parceled into smaller surfaces following the BHST contour. The resulting unstructured mesh consisted of 221284 cells, with skewness up to 0.64 and aspect ratio up to 50.

Picture 6. The mesh inside the infinite tunnel

4.2. Numerical methods


For the purpose of understanding the phenomenon and comparing the results, the first CFD calculations were made with the same setup, since it is suggested [23,24] that close results are obtainable nowadays even for differently based solution methods (pressure- or density-based). Nevertheless, in some cases the results did not follow the same behavior, which is why they were substituted. Therefore, in the cases with assumed negligible compressibility effects the pressure-based solution method was selected and, vice versa, where compressibility was assumed, the density-based solution method was used. In accordance with the solution method, the pressure-velocity coupling was set to SIMPLE, the spatial discretization for gradients was defined as Green-Gauss Node Based, with standard pressure and second-order discretization of the other parameters, from pressure to energy.

Picture 4. Geometry of the domain around the BHST in


the free flight
The ground presence was treated through a rectangular domain, keeping the same dimensions along the ground (inlet distance from the nose of 150 m, outlet of 300 m), with a height of 75 m and similar parceling as before. The BHST was lifted 0.4 m from the ground. Picture 5 presents the details of the mesh. The mesh for the near-ground configurations consisted of 383438 cells without adaption, and of 1030505 cells with the introduced adaption (in the compressible cases). The initially made non-adapted mesh was arranged to be denser over the BHST body and over the ground with the support of two bodies of influence, one elliptical and one flat. Later adaption, through the solver tools, was made over a rectangular field. All the mesh constructions involved a 10-layer inflation layer over the BHST wetted surfaces.

The turbulence models in use were the k-ε Realizable, RKE, and the Spalart-Allmaras, S-A, models. The first one was used for the treatment of the wall-bounded flows over the BHST passing the tunnel and of the incompressible flows over the BHST on the open rail, the reasoning being to ensure a quality wall treatment, involving the Non-Equilibrium Wall Function. The use of RKE was not motivated by calculation time economy; although these problems are far less complex than the BHST passing the tunnel, at a velocity of M = 0.4, for example, it was required to run over 3100 iterations to reach a converged solution. The S-A model was used in the cases of free flight and of compressible flow on the open rail, motivated by the quality/time

optimization of the calculations, since the priority was the BHST in the tunnel. Furthermore, whichever RANS model was in use, if compressible flow was assumed the energy equation was involved and the air was set as an ideal gas, with the viscosity described by the three-coefficient Sutherland function. The boundary conditions were imposed as follows: inlet - pressure far field, PFF; outlet - pressure outlet; ground - moving wall; and top - symmetry wall, while the tunnel walls were both moving walls. All the moving walls had velocities equal to that of the actual BHST and were treated as no-slip.
In the cases of the density-based solver, the implicit formulation with the Roe-FDS flux type was selected, as well as the Green-Gauss Node Based discretization, together with second-order discretization of the flow and of the modified turbulent viscosity. The calculations were additionally supported by solution steering, set to vary the Courant number in the range 0.01 to 200, with the default explicit under-relaxation factor of 0.75.

Picture 8. Cp vs. M, BHST in free flight, lower side


The ground effect on the BHST is presented in Picture 9. The presence of the shock wave underneath is noticeable at first sight, while the pressure distribution over the upper side is similar over the range of M. The phenomenon occurred at M = 0.4, but its initiation seems to have started at M = 0.3, judging by the observed compression waves. One may note the local similarity of the flow under the tail to the flow behavior through a nozzle. Nevertheless, here the long channel acted as a kind of entrance tank, continually filled with the flow from the entrance cone formed by the nose curve, with a gradual decrease of height.

5. RESULTS AND DISCUSSION


The functions of the pressure coefficient, Cp [25], plotted against the Mach number in the range of subsonic and transonic velocities, from M = 0.2 to M = 1, for the BHST in free flight, are shown in Pictures 7 and 8 (for the upper and the lower BHST side, respectively). The investigation was conceived to overview the flow character by applying what might be called a Mach lens. An increase of M was expected to leave traces, from initial Cp disturbances to shock waves, preferably outside the nose or tail zones.
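For reference, the pressure coefficient used in these plots is defined as Cp = (p - p∞)/q∞, with q∞ the free-stream dynamic pressure; a minimal Python sketch with assumed free-stream values (not taken from the simulations):

# Sketch of the pressure coefficient definition.
p, p_inf = 100800.0, 101325.0    # assumed local and free-stream static pressures [Pa]
rho_inf, V_inf = 1.225, 102.0    # assumed free-stream density [kg/m^3] and velocity [m/s]
q_inf = 0.5 * rho_inf * V_inf**2 # free-stream dynamic pressure [Pa]
Cp = (p - p_inf) / q_inf
print(f"q_inf = {q_inf:.0f} Pa, Cp = {Cp:.3f}")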

Picture 9. Cp vs. M, BHST on the open rail, upper and


lower side

Picture 7. Cp vs. M, BHST in free flight, upper side


Good agreement of the Cp behavior with that of a similar airfoil was present [16], while flow disturbances were spotted in the zones where the BHST nose narrows towards the BHST body. The increase of M up to 0.7 caused the presence of compression waves over the upper BHST surface. A further increase of M, in the range 0.7 < M < 0.8, gradually transformed the waves into shock waves, making them stronger and moving them downstream from the entrance of the tail cone to its tip. On the other side, an increase of M just over 0.8-0.9 started to produce shock waves of similar strength as the previous ones near the BHST trailing edge (Picture 9), as a response to the sharper lower tail side, i.e. the smaller semi-angle of the tail cone.

The comparison of Cp against M over the wetted BHST body passing the tunnel is shown in Picture 11. The presence of shock waves underneath the BHST at M = 0.4 is noticeable and, compared with the case of the BHST on the open rail, the stronger one occurred in the tunnel rather than over the ground alone. Comparing the flows over the BHST in the infinite tunnel, similarity was found between the flows under the train, with the difference that the wall-bounded flow decreased Cp further and moved it towards the tail zone. On the upper side, however, a compressible zone was formed over the central upper surface.
On the other hand, the Cp distribution over the tunnel ground

(Picture 12) showed similarity of the flow behavior in comparison with the tube train [21].


Picture 10. Cp vs. M, BHST passing the tunnel, lower


side

Picture 12. Cd vs. M for various designs in the presence of the ground, and for the kingfisher-like design in the three configurations: free flight, open rail and passing the tunnel

6. CONCLUSIONS
In this work the BHST was set to imagined conditions for a series of designs. The kingfisher-like bionic design was in focus. The kingfisher beak design was justified by the hydrodynamic tests and afterwards scaled and adjusted to the real-scale BHST. The numerical observations had the aim of finding answers related to the potential design problems. The present results of the hydrodynamic tests were encouraging, so the development of the test procedure and of the analysis will be continued in future work. For the presented kingfisher-like BHST design, it was found that the method pointed to zones that have to be re-designed, mostly on the lower side of the BHST. The method using the Mach lens is applicable in the early stages of BHST design, during the profile selection phase, and is effective and time-saving. The aerodynamic characteristics of the bionic kingfisher-like design showed good potential and technological applicability; next to it is the barracuda-like design. The other observed designs had either undesirable aerodynamic characteristics or they are not suitable for conventional rail vehicle construction. The kingfisher-like design presented stable characteristics over the M domain, but further improvements are necessary. These improvements refer both to the design and to the design method, in the sense that they should lead to the avoidance of the shock waves underneath the BHST in the operating M-range. For a further increase of the velocity, a reasonable solution would be the complete grounding of the BHST, so that it would become a complex parametric wedge; the

Picture 11. Cp vs M, infinite tunnel walls: tunnel top and


tunnel ground
The main aerodynamic characteristic of a high-speed train is the drag, since it is directly related to the power consumption; here it is represented by the non-dimensional coefficient cd [25], referred to the reference cross-section area of 3 m². In Picture 12, cd is plotted against the free-stream M. The characters of the plots for the BHST running freely and on the open rail are similar, but displaced in terms of both parameters, cd and M, giving the advantage to the free flight, as expected. Compared with airfoils exposed to free subsonic and transonic flows, as described by gasdynamics [17], the results of this work showed good agreement. The flow in the infinite tunnel had the largest drag coefficients in the range of the applied M. As one may note, the selected range of M in the case of the BHST passing the tunnel is narrow, M = 0.2-0.4, compared with the other two kingfisher-like configurations. The ground effects caused flow complexity, the presence of high rates of compressibility and, at last, the formation of shock waves underneath the BHST. Further investigations involving an increase of M were not reasonable without either drastic design changes of the BHST underside contour, in the manner of the sailfish-like or barracuda-like designs, or, better, fully closing the channel for simulation purposes. Comparing cd for the kingfisher-like designs, the one near the ground is lower than the one in the tunnel up to M < 0.3, after which, due to the wall bounding from both sides, cd becomes smaller when passing the tunnel than on the open rail.
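To connect cd with the physical quantities of interest, the following Python sketch evaluates the drag force and the power needed to overcome it for the 3 m² reference area; the density, velocity and cd values are assumptions for illustration only, not values read from the present results:

# Sketch: drag force and power from the drag coefficient.
rho, A = 1.225, 3.0        # assumed air density [kg/m^3] and reference area [m^2] (3 m^2 from the text)
V  = 102.0                 # roughly M = 0.3 at sea level [m/s], assumed
cd = 0.20                  # assumed drag coefficient
D = 0.5 * rho * V**2 * A * cd        # drag force [N]
P = D * V                            # power to overcome drag [W]
print(f"D = {D/1000:.1f} kN, P = {P/1e6:.2f} MW")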
numerical calculations would become simpler and less demanding in the selection phase. Finally, the design-flow connection might be improved, and the research field broadened, by applying experience from the field of high-speed aerodynamics, owing to the similarity of the phenomena regardless of the end purposes.

OTEH2016
at Belgrade, 2016.
[10] *Nature History Museum, Museum exhibit of the
kingfisher prepared for the research use, 1939.
[11] Sawer,P.: Wildlife Photography/Solent, via Daily
Mail, goo.gl/B14OKP, with a written permission
[last accessed on 22/02/2016]
[12] Hereman,W.: Shallow water waves and solitary
waves, Mathematics of Complexity and Dynamical
Systems, Springer, ed Meyers R A, New York, 2012
1520-32.
[13] Ghazizade-Ahsaee,H., Nikseresht,A.H.: Numerical
Simulation of Two Dimensional Dynamic Motion of
the Symmetric Water Impact of a Wedge, IJMT, 1(1)
(2013) 11-22.
[14] Faltinsen,O.: Hydrodynamics of High-speed Marine
Vehicles Cambridge, Cambridge University Press,
New York, 2005.
[15] Raghunathana,R.S.,
Kimb,H.D.,
Setoguchi,T.:
Aerodynamics of high-speed railway train, Progress
in Aerospace Sciences, 38 (2002) 469514

ACKNOWLEDGMENTS
The authors are grateful to the Ministry of Education,
Science and Technological Development, Republic of
Serbia, for supporting this work as a part of the researches
through the financed Projects TR-35045 and TR 34028
(2011-2016). The authors are grateful to Mr. Marko Rakovic, biologist and curator ornithologist, and to the Natural History Museum in Belgrade for their kind help, access to exhibits and knowledge, crucial for understanding the natural behavior of the kingfisher and for its numerical modeling. The authors are also grateful to Mr. Paul Sawer, wildlife photographer, and to the publisher Solent for their kindness in approving the use of the artworks in this research, which helped the creation of the detailed kingfisher shape.

[16] Liepmann,H.W., Roshko,A.: Elements of Gasdynamics, Galcit Aeronautical Series, John Wiley and Sons, London, 1957.
[17] Dobrovolskaya,Z.N.: On some problems of
similarity flow of fluids with a free surface, Journal
of Fluid Mechanics, 36 (1969) 805-29.
[18] Puhari,M., Luanin,V., Lini,S., Mati,D.: Research
Some Aerodynamic Phenomenon of High Speed
Trains in Low Speed Wind Tunnel, Proceedings of
the 3rd International Scientific and Professional
Conference CORRIDOR 10 - A sustainable way of
integrations, October 25th, 2012., Belgrade, Serbia,
[19] Lini,S.,
Risti,S.,
Stefanovi,Z.,
Kozi,M.,
Ocokolji,G.: Experimental and Numerical Study of
Super-Critical Flow Around the Rough Sphere,
Scientific Technical Review, 65 (2) (2015) 11-19.
[20] Lucanin,V., PuharicM., Milkovic,D., Golubovic,S.,
Linic,S.: Determining the influence of an air wave
caused by a passing train on the passengers standing
at the platform, International Journal of Heavy
Vehicle Systems, 19 (3) (2012) 299-313
[21] Tae-Kyung,Kim, Kyu-Hong,Kim, Hyeok-Bin, Kwon:
Aerodynamic characteristics of a tube train, J. Wind
Eng. Ind. Aerodyn. 99 (1) (2011) 1871196.
[22] Dumont,E.R.: Bone density and the lightweight
skeletons of birds, Proc. R. Soc. B. 277 (2010) 2193
2198

References
[1] Bhushan,B.: Biomimetics: lessons from nature an
overview, Phil. Trans. R. Soc. A 367 (2009) 1445
1486.
[2] Kobayashi,K.: 2005 JFS Biomimicry Interview
Series: No.6., Shinkansen Technology Learned from
an Owl?, - The story of Eiji Nakatsu JFS Newsletter
31, goo.gl/HOZrjL [last accessed on 20/03/2015]
[3] *Ask Nature Shinkansen Train, High-speed train
silently slices through air, goo.gl/aSfpQ5 [last
accessed on 20/03/2015]
[4] McKeag,T.: Auspicious Forms: Designing the Sanyo
Shinkansen 500-Series Bullet Train, Zygote
Quarterly (2012) 14-33.
[5] Anders,J.B.: Biomimetic Flow Control, AIAA-20002543, NASA Langley Research Center. Fluid (2000),
19-22 June 2000, Denver, CO.
[6] Rauo,B.: Bionics in Design, University of Belgrade,
Belgrade, eBook on CD., (in Serbian), 2014.
[7] Linic,S.,
Rasuo,B.,
Kozic,M.,
Lucanin,V.,
Puhari,M.: Comparison of numerically obtained 2D
flow fields for the bionic high speed train concept
designs inspired with aquatic and flying animals,
Proceedings of the 6th International Scientific
Conference on Defensive Technologies - OTEH
2014, The Military Technical Institute, (2014)
Belgrade, 44-49.
[8] Linic,S., Rasuo,B., Kozic,M., Lucanin,V., Bengin,A.:
Drag-Coefficient Behavior of the Bio-Inspired High
Speed Train Design, Proceedings of the 5th
International Congress of Serbian Society of
Mechanics, Arandjelovac, June 15-17 (2015) Serbia.
[9] Personal communication with Mr. Marko Rakovic,
research assistant from the Natural History Museum

[23] *Theory- and Users-Guide of the ANSYS Fluent 12


Documentation, ANSYS Inc., 2009.
[24] Veersteg,H.K., Malalasekera,W.: An Introduction to
Computational Fluid Dynamics The Finite Volume
Method, Longman Scientific & Technical, New
York, 1995.
[25] Anderson,J.D.: Fundamentals of Aerodynamics, II
ed., McGraw-Hill, 1991.

46

OBSERVATIONS ON SOME TRANSONIC WIND TUNNEL TEST RESULTS


OF A STANDARD MODEL WITH A T-TAIL
DIJANA DAMLJANOVI
Military Technical Institute (VTI), Belgrade, didamlj@gmail.com
ORE VUKOVI
Military Technical Institute (VTI), Belgrade, vdjole@sbb.rs
ALEKSANDAR VITI
Military Technical Institute (VTI) (retired), Belgrade, sasavitic@gmail.com
JOVAN ISAKOVI
College of Applied Engineering Studies, Belgrade-Zemun, jisakovic@tehnikum.edu.rs
GORAN OCOKOLJI
Military Technical Institute (VTI), Belgrade, ocokoljic.goran@gmail.com

Abstract: As a part of a periodic health monitoring of the wind tunnel structure, instrumentation and flow quality, a
series of tests of an AGARD-C calibration model was performed in the 1.5 m T-38 trisonic wind tunnel of the Military
Technical Institute (VTI) in Belgrade. The tests comprised measurements of forces and moments in the transonic Mach
number range, with the purpose of comparing the model's obtained aerodynamic characteristics with those from other
wind tunnel laboratories, in accordance with an adopted procedure for standard models testing. Inter-facility
correlations were based on test results of physically the same model in the 5ft trisonic wind tunnel of the National
Research Council (later operated as National Aeronautical Establishment) of Canada, in the 1.2 m trisonic wind tunnel
of the Romanian National Institute for Scientific and Technical Creation and in the T-38 wind tunnel during the
commissioning period. Analysis of correlated test results confirmed a good flow quality in the T-38 test section, good
condition of wind tunnel structure and instrumentation, and the correctness of the data reduction algorithm. Small
differences were observed in the pitching moment coefficient data obtained in the normal and inverted model
configurations, and it has preliminary been concluded that the effect may have been caused by a slight asymmetry of
flow in the rear part of the wind tunnel test section, the AGARD-C model being known for the high sensitivity of the
pitching moment to local conditions.
Keywords: wind tunnel, transonic flow, standard model, aerodynamic characteristics.
The AGARD-C model configuration differs from the better-known AGARD-B configuration by the addition of a rear-body segment with a T-tail. The longer body of the AGARD model C and the existence of the T-tail make it easier to detect (from anomalies in the wind tunnel test results) if the shock waves reflected from the walls of the wind tunnel test section are passing too close to the rear end of the model. The existence of the tail also makes this model more sensitive than the AGARD-B to flow curvature in the wind tunnel test section.

1. INTRODUCTION
The Military Technical Institute (VTI) in Belgrade has
established a procedure for wind-tunnel data quality
assurance, primarily based on periodic testing of the
AGARD-B standard model [1]. The procedure comprises
wind-tunnel testing, maintenance of a database of
standard test results and a statistical control on the test
data, [2-4]. Considerations and directives recommended
in the procedure have now been applied and implemented
in testing of the AGARD standard model C with a T-tail.
The intention of the research was to start the statistical
control on the database wind-tunnel results for this model.
The obtained and analyzed results will serve to ascertain
the stability of the measurement process and help in
future tests of similar configurations.

The configuration was originally designed for calibration of


transonic wind tunnels [5-7] and primarily used in them.
Unfortunately the database of published test results is
somewhat smaller than the one for the AGARD-B model.
Besides the review data available in [7], comprising results from AEDC, Boeing, ONERA and NLR, which are, unfortunately, of very poor legibility, available test results include those [8] from the Canadian 5 ft trisonic blowdown wind tunnel of the National Research Council (NRC), later operated as the National Aeronautical Establishment (NAE) [9],
those from the Romanian 1.2 m trisonic blowdown wind
tunnel [10,11] of the National Institute for Scientific and
Technical Creation (INCREST, now INCAS - National
Institute for Aerospace Research) and tests made during the

commissioning of the T-38 wind tunnel of VTI [12]. The


intention of the authors is to extend the amount of data and
make it available to the community.


The support sting for the AGARD-C model is identical to


the sting for the AGARD-B model, having a length of 3D
aft of model base and a diameter of 0.5D. In order to
reduce cost and produce more versatile wind tunnel
models, actual designs of AGARD-B and AGARD-C are
sometimes realized as an AGARD-B configuration to
which a body segment with the T-tail can be attached at
the rear end to form the AGARD-C configuration.

2. STANDARD MODEL WITH A T-TAIL


The AGARD-C standard model, a derivative of the well-known standard AGARD-B model, is an ogive-cylinder with a delta wing and a horizontal and a vertical tail in the T-tail configuration, Picture 1.

The AGARD-C wind tunnel calibration model used in the T-38 was supplied by Boeing. The model size (115.8 mm dia.) was chosen with respect to the tunnel's test section size. The model had been used in previous T-38 wind tunnel calibrations and in tests in other wind tunnels, and there is a database with which to compare the obtained results [12].

3. TEST FACILITY
The T-38 test facility at the Military Technical Institute (VTI) in Belgrade is a blowdown-type pressurized wind tunnel [13] with a 1.5 m × 1.5 m square test section, Picture 3. For subsonic and supersonic tests, the test section has solid walls, while for transonic tests a section with porous walls is inserted in the configuration. The porosity of the walls can be varied between 1.5% and 8%, depending on the Mach number, so as to achieve the best flow quality.

Picture 1. AGARD-C model with a body diameter of


115.8 mm
At the AGARD Wind Tunnel and Model Testing Panel
meeting in Paris, France, in 1954, it was agreed [7] to add
a third model configuration to the family of AGARD
standard calibration models, by extending the body of the
AGARD-B by 1.5 diameters and by adding a T-tail.

Mach number in the range 0.2 to 4.0 can be achieved in


the test section, with Reynolds numbers up to 110 million
per meter. In the subsonic configuration, Mach number is
set by sidewall flaps in the tunnel diffuser. In the
supersonic configuration, Mach number is set by the
flexible nozzle contour, while in transonic configuration,
Mach number is both set by sidewall flaps and the flexible
nozzle, and actively regulated by blow-off system. Mach
number can be set and regulated to within 0.5% of the
nominal value.
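For orientation, the link between the quoted stagnation pressures and the test-section static pressure follows the isentropic relation p0/p = (1 + (γ-1)/2·M²)^(γ/(γ-1)); a short Python sketch with an assumed stagnation pressure within the quoted range:

# Sketch: isentropic stagnation-to-static pressure ratio vs. Mach number.
gamma = 1.4
p0 = 3.0e5          # assumed stagnation pressure [Pa] (within the 1.1-15 bar range)
for M in (0.7, 1.05, 2.0):
    ratio = (1.0 + 0.5 * (gamma - 1.0) * M**2) ** (gamma / (gamma - 1.0))
    print(f"M = {M:4.2f}: p0/p = {ratio:5.2f}, static p = {p0 / ratio / 1e5:.2f} bar")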

The horizontal tail has an area equal to 1/6 of the wing


area. Sections of the vertical and horizontal tail are
circular arc profiles defined identically to the profile of
the wing. Forward of the 1.5D body extension, the
geometry of the AGARD-C model is identical to that of
the AGARD-B: an 8.5D long solid body of revolution
consisting of the 5.5D long cylindrical segment and a
nose with the length of 3D.
Also, the position of the moments reduction point (the
aerodynamic centre) is the same as on AGARD-B. All its
dimensions are given in terms of the body diameter D so
that the model can be produced in any scale, as
appropriate for a particular wind tunnel, Picture 2.

Picture 3. The T-38 test facility in VTI, Belgrade


Stagnation pressure in the test section can be maintained
between 1.1 bar and 15 bar, depending on Mach number,
and regulated to 0.3% of nominal value. Run times are in
the range 6s to 60s, depending on Mach number and
stagnation pressure.
The model is supported in the test section by a tail sting mounted on a pitch-and-roll mechanism by which the desired aerodynamic angles can be achieved. The facility supports both step-by-step model movement and continuous movement of the model (sweep) during measurements. The positioning accuracy is 0.05° in both pitch and roll.

Picture 2. A drawing defining the geometry of the


AGARD-C standard model and its sting fixture

Canadian NRC/NAE 5ft trisonic wind tunnel and results


[11] from the Romanian 1.2 m × 1.2 m INCREST trisonic
wind tunnel facility. The early set of the T-38 wind-tunnel
test data are also correlated [12]. All results are from the
tests with the 115.8 mm dia. model.
A good agreement of the correlated test results confirms a
high quality of air flow in the T-38 test section, good
condition of the wind tunnel instrumentation and the
correctness of the data reduction algorithm, Pictures 5-8.

Picture 5. Inter-facility correlation in the drag force


measurement

Picture 4. Pitch-and-roll model support mechanism

4. RESULTS AND DISCUSSION


VTI is among the laboratories that have adopted the
practice [1] of periodic testing of a standard model every
couple of years in order to provide a continued confidence
in the reliability of measurements in their wind tunnels.
The adopted procedure for determining the overall T-38
wind-tunnel data quality and verification in the standard
testing [1] includes a number of steps, but here consideration is made from only two points: (1) inter-facility correlations and (2) test section symmetry correlations. Comparable test results are presented in the wind axes system in the form of graphs, showing the drag force coefficient, lift force coefficient, pitching moment coefficient and base pressure coefficient as functions of the angle of attack. Test results are given for the model aerodynamic center, Picture 2. The model reference length is the mean aerodynamic chord.
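For reference, the plotted coefficients are formed from the measured loads as C = F/(q·S), the pitching moment additionally being divided by the reference length; a minimal Python sketch in which all numerical values (dynamic pressure, reference area, chord and loads) are assumptions, not actual test data:

# Sketch: forming non-dimensional coefficients from measured loads.
q = 40000.0      # assumed test-section dynamic pressure [Pa]
S = 0.0326       # assumed wing reference area of the 115.8 mm dia. model [m^2]
c = 0.137        # assumed mean aerodynamic chord [m]
lift, drag, pitching_moment = 520.0, 65.0, -9.5   # assumed loads [N, N, N m]
CL = lift / (q * S)
CD = drag / (q * S)
Cm = pitching_moment / (q * S * c)
print(f"CL = {CL:.3f}, CD = {CD:.3f}, Cm = {Cm:.3f}")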

Picture 6. Inter-facility correlation in the lift force


measurement

4.1. Sets of VTI test results


A small set of early AGARD model C test data obtained during the commissioning of the T-38 wind tunnel existed in the VTI database, as the model B is more often used for wind tunnel calibration and verification [1,14]. The tests had been performed at the transonic Mach numbers 0.7 to 1.05, at angles of attack from -2° to +13°, traversed in a continuous-movement mode.

Picture 7. Inter-facility correlation in the pitching


moment measurement

An additional test campaign with this model in the T-38 wind tunnel was executed later, comprising measurements at Mach numbers 0.7 to 1.15. The angle of attack range was -4° to +10°, traversed in a continuous-movement mode.

4.2. Inter-facility correlation


The test data were analyzed based on correlations with
data from other experimental aerodynamics laboratories.
The reference sets of data were results [8] from the

Picture 8. Inter-facility correlation in the base pressure


measurement

Analysis of the measured aerodynamic coefficients from the point of test section symmetry was done for two Mach 0.7 runs at the two opposite roll angles: 0° (model-upright), Picture 9, and 180° (model-inverted), Picture 10.

The large deviation of the pitching moment coefficient


obtained in the INCREST wind tunnel is obviously a
result of the reduction of the moments to a reference point
different from that used in other tests. Unfortunately, the
location of the reference point used in INCREST tests is
not known, so the data could not be recomputed for the
proper reference point.
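Had the INCREST reference point been known, the data could have been recomputed with the usual moment-reference transfer; a minimal Python sketch of that relation, with all numbers assumed for illustration and with the sign of the shift term depending on the axis convention in use:

# Sketch: transferring a pitching moment coefficient to a new reference point.
Cm_old = -0.050   # assumed pitching moment coefficient about the original point
CN     = 0.40     # assumed normal force coefficient
dx     = 0.030    # assumed shift of the reference point along the body axis [m]
c      = 0.137    # assumed reference chord [m]
Cm_new = Cm_old + CN * dx / c
print(f"Cm about the new reference point: {Cm_new:.3f}")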

Mach 0.7 data in the wind axes system at aerodynamically the same angles of attack from the model-upright and model-inverted runs were compared. CAD renderings of the 115.8 mm dia. AGARD-C model in both the upright and the inverted configuration in the T-38 test section, given in Pictures 9 and 10, present aerodynamically the same +10° angle of attack in the wind axes system.

Relatively large scatter of the base pressure coefficients in


the correlations (Picture 8) can probably be explained by
the fact that absolute pressure transducers of high range
and, therefore, lower absolute accuracy, were used for the
base pressure measurement in the NRC/NAE and in early
VTI tests, while the newer VTI tests were performed
using a differential pressure transducer of a lower range
and higher accuracy. The method of base-pressure
measurement in the INCREST wind tunnel is not known.

Comparison of test data obtained in these two runs


showed that there was no noticeable effect on the lift and
the drag aerodynamic coefficients. It was seen that the
magnitude of differences in the force (drag and lift,
Pictures 11 and 12) and pressure measurements (Picture
13) that can be attributed to asymmetry of the test section
are comparable to the total measurement uncertainties [1]
confirming a good symmetry of the test section. It should
be noted that the test section was calibrated and that the
determined flow angularities in vertical and horizontal
planes were up-to-date.

It should also be noted that, because of the smaller size of


the INCREST wind tunnel relative to NCR and VTI wind
tunnels (1.2 m vs. 1.5 m) the blockage of the AGARD-C
model in tests [11] was higher than in VTI and NAE tests,
which may have affected the results, including the
measurement of the base pressure.

4.3. Correlation from the point of symmetry


The test data were analyzed from the point of test section
symmetry, more exactly test results were checked from
runs with model both in upright and inverted position.


Picture 11. Correlations from the point of test section


symmetry in the drag force measurement

Picture 9. CAD rendering of the 115.8 mm dia. AGARDC model, upright configuration, T-38 test section, +10
angle of attack



Picture 12. Correlations from the point of test section


symmetry in the lift force measurement

Picture 10. CAD rendering of the 115.8 mm dia.


AGARD-C model, inverted configuration, T-38 test
section, +10 angle of attack

Picture 13. Correlation from the point of test section


symmetry in the base pressure measurement

Contrary to the drag coefficient and lift coefficient data, there is some difference between the pitching moment coefficients obtained in the normal and inverted positions (Picture 14). It is known [7] that the existence of the T-tail makes the AGARD-C model very sensitive (more than the model B) to flow curvature in the test section. Indeed, comparable differences were not observed in the tests of the shorter AGARD-B model in VTI. Therefore, the differences in the pitching moment can be attributed to a small amount of flow curvature in the rear part of the transonic test section, possibly caused by the asymmetry of the model support mechanism (Picture 9 and Picture 10). Further investigation of the observed effect is indicated, and the feasibility of eliminating this asymmetry by a differential porosity setting of the upper and lower walls of the downstream part of the transonic test section, located in the model cart of the T-38 wind tunnel, is being considered.

It will also be of interest to correlate the obtained data with those from the large subsonic wind tunnel of VTI, where the standard model C has also recently been tested following the adopted procedure. Therefore, future tests of the AGARD-C model in the T-38 wind tunnel should encompass a wider span of Mach numbers, not only in the transonic range but in the subsonic range as well. It is expected that, after the introduction of a new T-38 wind tunnel control system and fine tuning of all the wind tunnel systems and subsystems, a new set of data will become the reference for future correlations and verifications.

References
[1] Damljanovic, D., Isakovic, J., Rašuo, B., "T-38 Wind-Tunnel Data Quality Assurance Based on Testing of a Standard Model", Journal of Aircraft, 50 (4) (2013) 1141-1149.
[2] Recommended Practice: Calibration of Subsonic and
Transonic Wind Tunnels, AIAA-R093-2003, AIAA.
[3] Reed T.D., Pope T.C., Cooksey J.M., Calibration of
Transonic and Supersonic Wind Tunnels, NASA
Contractor Report 2920, Vought Corporation, 1977.
[4] Hemsch M., Grubb J., Krieger W., Cler D., "Langley
Wind Tunnel Data Quality Assurance: Check
Standard Results", AIAA 2000-2201, Proceedings of
the 21st AIAA Advanced Measurement Technology
and Ground Testing Conference, 2000.
[5] Specification for AGARD Wind Tunnel Calibration
Models, AGARD memorandum, AGARD, 1955.
[6] Wind Tunnel Calibration Models, AGARD Specification 2, AGARD, 1958.
[7] Hills R. (ed.), A Review of Measurements on AGARD
Calibration Models, AGARDograph 64, Aircraft
Research Association Bedford, England, 1961.
[8] Report on Tests Conducted on NACA 0012, and
AGARD-B and C Models in the NAE 5 ft Blowdown
Wind Tunnel During Training of VTI Personnel:
Nov-Dec 1981, DSMA Rept. No. 4001/R84, 1983.
[9] NRC-CNRC Information, Aerodynamics: 1.5 m x 1.5 m Trisonic Blowdown Wind Tunnel, IAR-AL03e, National Research Council Canada, 2005.
[10] Munteanu, F., "INCAS Trisonic Wind Tunnel",
INCAS-Bulletin, No.1/2009, INCAS National Institute for Aerospace Research Romania, 2009.
[11] The calibration of the transonic and supersonic test
section using the AGARD model B and C, Report:
RL-ST-14, National Institute for Scientific and
Technical Creation, Bucharest, Romania, 1979.
[12] Isakovic, J., Zrnic, N., Janjikopanji, G., "Testing of the AGARD B/C, ONERA and SDM Calibration Models in the T-38 1.5 m x 1.5 m Trisonic Wind Tunnel", Proceedings of the 19th ICAS Congress, 1994, pp. 19.
[13] Elfstrom, G.M., Medved, B., "The Yugoslav 1.5 m Trisonic Blowdown Wind Tunnel", Paper 86-0746-CP, AIAA, 1986.
[14] Damljanovic, D., Vitic, A., Vukovic, Dj., Isakovic, J., "Testing of AGARD-B Calibration Model in the T-38 Trisonic Wind Tunnel", Scientific Technical Review, 56 (2) (2006) 52-62.

Picture 14. Correlations from the point of test section symmetry in the pitching moment measurement (pitching moment coefficient vs. angle of attack; AGARD-C model, Mach 0.7, porous wall test section, T-38 model upright and model inverted)

For comparison, the available data on the differences in the pitching moment coefficient for the normal and inverted model positions in the NAE/NRC 5 ft wind tunnel are given in Picture 15.
Picture 15. Correlation from the point of test section symmetry in the pitching moment measurement in the Canadian facility (pitching moment coefficient vs. angle of attack; AGARD-C model, Mach 0.7, porous wall test section, NRC/NAE model upright and model inverted)

5. CONCLUSION
The intention of the research was to start the statistical
control on standard AGARD-C model test data in
accordance with the procedures adopted in VTI and to
expand the relatively meagre published reference data.
There is a need for further tests of this model in order to
investigate the preliminary conclusions related to the
pitching moment coefficient. Simulations performed by
computational fluid dynamics software tools can be
helpful in understanding the observed phenomena.

NUMERICAL AND EXPERIMENTAL ASSESSMENT OF TRANSONIC TURBULENT FLOW AROUND ONERA M4 MODEL
JELENA SVORCAN
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, jsvorcan@mas.bg.ac.rs
DIJANA DAMLJANOVI
Military Technical Institute, Belgrade, didamlj@gmail.com
DRAGAN KOMAROV
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, dkomarov@mas.bg.ac.rs
SLOBODAN STUPAR
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, sstupar@mas.bg.ac.rs
NEBOJA PETROVI
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, npetrovic@mas.bg.ac.rs

Abstract: Experimental investigation of transonic flow at a Mach number of 0.84 around the subsonic/transonic transport calibration model ONERA M4 at four different angles-of-attack has been conducted in the T-38 trisonic blow-down wind tunnel of the Serbian Military Technical Institute. Experimental results include relative pressure and pressure coefficient distributions along three wing sections. The obtained aerodynamic performance data present a good basis for CFD studies. Both the experimental and numerical set-ups and processes are briefly described. Numerical simulations are performed in ANSYS FLUENT 16.2 with several different turbulence models employed. The two sets of results are presented and compared. The pressure distribution along the wing surface, the sonic bubble on the wing suction side and the wing-tip vortex have been investigated in more detail and presented in the form of contours and iso-surfaces.
Keywords: transonic flow, experiment, CFD, turbulence models, wing-tip vortex.

1. INTRODUCTION
Acquiring accurate numerical data (on boundary layer
transition characteristics at transonic speeds) to
supplement experimental data can be extremely useful for
possible improvement of performances at cruise regimes.
Numerical experiments may present a powerful and
inexpensive tool that can be and is used for development
of improved wing-body designs with increased
aerodynamic performances or reduced fuel burn, noise
and emissions [6, 8].

At transonic flow conditions, many flow phenomena such as pressure fluctuations, shock/boundary-layer interaction, etc. appear, which makes measuring and simulating these types of flows difficult [1, 2]. At high subsonic speeds, sonic bubbles (supersonic regions) terminating in shock waves appear around the airfoil. At a sufficiently large pressure rise across the shock, shock-induced separation of the boundary layer occurs. As stated in [2], for a turbulent boundary layer this starts when the local Mach number just upstream of the shock is in the range 1.25-1.3.

Furthermore, in most cases, experimental data cannot


provide a complete insight into the complex transonic
flow field. Visualizing different flow regions around the
body (sonic zones, transition or separation zones, wing-tip
vortex, etc.) can help in better understanding of the
complex flow physics.

Additional difficulties in performing these experiments


arise from the existence of boundaries in wind tunnels.
For that reason, ONERA has built a number of standard
models for use in evaluating Reynolds number and
blockage effects in various wind tunnels [3]. An extensive
collection of aerodynamic performance data has been
compiled over the years [3-5].

The analysis presented in the paper focuses on transonic


flow field around the wing of the standard transport
calibration model ONERA M4. Experimentally obtained
pressure distributions along wing cross sections are
compared and complemented by numerically obtained
pressure distributions, Mach number distributions,
vorticity fields, etc. at different angles-of-attack (AoA).

Although measurements of flows around standard models are


primarily done for the purpose of validation of the testing
facility and empirical correction procedures, they can also
serve for the establishment of guidelines for the development
of a sufficiently accurate numerical set-up and the assessment
of numerical models (e.g. turbulence models) [6,7].

2. EXPERIMENT
The experimental part of the presented research has been performed in the T-38 trisonic blow-down wind tunnel of the Serbian Military Technical Institute. The dimensions of the test section are 1.5 m x 1.5 m. In this wind tunnel, Mach numbers in the range 0.2 to 4 and Reynolds numbers up to 110 million per meter are achievable. For transonic tests, a section with porous walls is inserted in the wind tunnel configuration.
The values of flow quantities measured in the wind tunnel
and used in numerical simulations are given in Table 1.
Table 1. Transonic flow conditions, M = 0.84

p0 [bar]   p [bar]   q [bar]   T0 [K]   MRe    V [m/s]
1.524      0.960     0.474     282.6    2.52   265.1
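As a quick cross-check, the tabulated values are mutually consistent under the usual isentropic perfect-gas relations; the minimal sketch below assumes γ = 1.4 and R = 287 J/(kg K), constants that are not quoted in the paper.

import math

gamma, R = 1.4, 287.0
p0, p, T0 = 1.524e5, 0.960e5, 282.6      # Pa, Pa, K, from Table 1

# Mach number from the stagnation-to-static pressure ratio
M = math.sqrt((2.0 / (gamma - 1.0)) * ((p0 / p) ** ((gamma - 1.0) / gamma) - 1.0))
q = 0.5 * gamma * p * M * M              # dynamic pressure
T = T0 / (1.0 + 0.5 * (gamma - 1.0) * M * M)
V = M * math.sqrt(gamma * R * T)         # free-stream velocity
print(round(M, 3), round(q / 1e5, 3), round(V, 1))   # ~0.84, ~0.474 bar, ~265 m/s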

Figure 2. Model in the wind tunnel


The stagnation pressure in the test section was measured by a Mensor quartz bourdon tube absolute pressure transducer pneumatically connected to a pitot probe in the settling chamber of the wind tunnel. The range of this transducer was 7 bar. The static pressure in the test section was measured by a Mensor quartz bourdon tube absolute pressure transducer pneumatically connected to an orifice on the test section sidewall. The range of this transducer was 1.75 bar. The nonlinearity and hysteresis of the transducers used were typically 0.02% F.S.

The subsonic/transonic transport calibration model ONERA M4 is a well-known, standard configuration used to investigate wind tunnel transonic performance [5]. It consists of a fuselage with wing and tail surfaces. The fuselage is shaped like an ogive in the fore part (x/L < 0.2935), followed by a cylinder (0.2935 ≤ x/L < 0.6654) and ending in an elliptic boat tail. The wing and tail cross sections are the same: "peaky" type symmetric, with a maximal relative thickness of 10.5% occurring at the 37.5% chord location (fig. 1). The wings are swept, tapered and untwisted, with a setting angle of 4° and a dihedral angle of 3°.

The stagnation temperature was measured by an RTD probe in the settling chamber. The accuracy of this transducer was approximately 0.5 K.
The pressure distribution was measured using an electromechanical scanning device with a solenoid-driven valve. A piezoresistive differential pressure transducer was used to measure the local pressures.
The data acquisition system consisted of a Teledyne 64-channel front end controlled by a computer. The front-end channels for the flow parameter transducers were set with 10 Hz, fourth-order low-pass Butterworth filters and appropriate amplification. The front-end channel for the pressure transducer was set with a 100 Hz low-pass filter of the same type.
The scanning device operates by rotating a valve that connects the model pressure measuring ports sequentially to a single differential pressure transducer. The main benefit of this concept is that many local pressures could be measured with a single transducer, which greatly simplified the test installation, put less load on the data acquisition system, etc. The main drawbacks lie in the fact that the pressures were not measured simultaneously and in the fact that, after each port switching of the rotating valve, a certain time was required for the pressure to settle in the tubing. The pressure-settling time effectively limited the scanning speed to about 10 ports/second, although the mechanism was capable of scanning at about 40 ports/second.

Figure 1. Model geometry


Pressure measurements have been conducted along three different wing sections denoted by S1, S2 and S3 in fig. 1. Sections S1 and S3 are parallel to the fuselage axis, while section S2 is normal to the wing line of maximum thickness (located at 37.5% of the chord). The measuring equipment was built into the model. The numbers of ports distributed along the sections are as follows: 39 ports per S1 (27 at the suction and 12 at the pressure side), 41 (29 + 12) per S2 and 39 (27 + 12) per S3.
In the test section the model is supported by a tail sting mounted on a pitch-and-roll mechanism by which different angles-of-attack can be achieved, fig. 2. Here, four different angles-of-attack were considered: -3.3°, -1.57°, 0.2° and 2.06°.
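Because the ports are connected sequentially to one transducer, a complete pass over all pressure ports takes on the order of ten seconds; the back-of-the-envelope sketch below combines the port counts above with the scanning speed quoted earlier (an illustration, not a figure reported by the authors).

ports = {"S1": 39, "S2": 41, "S3": 39}   # ports per section, from the text
scan_rate = 10.0                         # ports per second, practical limit
total_ports = sum(ports.values())
print(total_ports, total_ports / scan_rate)   # 119 ports, about 12 s per pass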

3. NUMERICAL SET-UP
Since the quality of the computational grid is extremely
important, the meshes were created according to general

recommendations of the AIAA High Lift and Drag Prediction Workshops [9, 10]. The generated meshes are built around a half-model. The support system is not taken into account; it should probably be included in later stages of the investigation for better results. The surrounding domain has the form of a half-sphere, with the far-field boundary located approximately 100 cref from the model (R = 11 m).


4. RESULTS AND DISCUSSION


Prior to the discussion, it should be repeated that the wing angle-of-attack differs from the global AoA by the value of the wing setting angle of 4°. For that reason, the wing AoA are actually 0.7°, 2.43°, 4.2° and 6.06°. Given that the last value is quite high for the investigated flight regime, it is not surprising that the numerical results diverge from the experimental data at higher AoA. Other authors also confirmed that, while at lower angles-of-attack good predictions can be achieved even with URANS models (SA turbulence model) on medium-sized meshes (around 2 million cells), the results at higher AoA tend to deteriorate [10].

As a part of the grid convergence study, a family of parametrically similar meshes was generated (same topology, same stretching factors). They are all hybrid, unstructured and three-dimensional. The coarse grid (C) numbers around 860,000 cells, the medium (M) around 1,450,000 cells, and the fine (F) around 2,600,000 cells. Thirty to forty layers of prismatic cells exist around the model walls, resulting in a dimensionless wall distance of around 3 and 1 (y+ ≈ 3 and y+ < 1) for the coarse and fine grid, respectively. The grid spacing normal to the symmetry plane is larger.
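For orientation, the wall-normal height of the first cell needed for a target y+ can be pre-estimated with a flat-plate rule of thumb; the sketch below uses assumed free-stream properties close to the test conditions and a placeholder reference length, and is not the meshing procedure used by the authors.

import math

def first_cell_height(y_plus, U, rho, mu, x_ref):
    """Rough flat-plate estimate of the first-cell height for a target y+."""
    Re_x = rho * U * x_ref / mu
    cf = 0.026 / Re_x ** (1.0 / 7.0)     # common turbulent flat-plate fit
    tau_w = 0.5 * cf * rho * U * U       # wall shear stress
    u_tau = math.sqrt(tau_w / rho)       # friction velocity
    return y_plus * mu / (rho * u_tau)

# assumed values: U, rho, mu near the Table 1 conditions; x_ref is a placeholder chord
print(first_cell_height(y_plus=1.0, U=265.0, rho=1.35, mu=1.6e-5, x_ref=0.11))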

4.1. Pressure distribution and aerodynamic coefficients

The mesh along the wing surfaces is mapped-faced with a


biased chord-wise distribution of element size, fig. 3.
Cells around the leading and trailing edge are
approximately 0.5%c for medium mesh. Wing trailing
edge is modeled as blunt.

Figs. 4-6 illustrate relative pressure distributions along the


three cross-sections for different turbulence models and
angles-of-attack.

Figure 3. Computational F-grid on the wing


Numerical simulations were performed in ANSYS FLUENT 16.2, where the governing flow equations for compressible, viscous fluid were solved by the finite-volume method. The Reynolds-averaged Navier-Stokes (RANS) equations were closed by the one-equation Spalart-Allmaras (SA) and the two-equation k-ω SST (kwSST) turbulence models. The SA model incorporates a modified turbulent viscosity equation, while the kwSST model presents a combination of the standard k-ω model near the walls and the k-ε model in the outer layer. The fluid, air, was considered as an ideal gas whose dynamic viscosity changes according to the Sutherland law.
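For completeness, Sutherland's law for air can be written as below; the reference constants are the commonly used ones and are an assumption here, since the paper does not list the values set in FLUENT.

def sutherland_mu(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity of air [Pa s] from Sutherland's law."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

print(sutherland_mu(248.0))   # ~1.6e-5 Pa s at the approximate test static temperature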

Figure 4. Relative pressure distribution at cross-section


S1 at AoA = -1.57; - SA on M-mesh, -- kwSST on C-mesh

A density-based implicit solver was used. Spatial discretizations were of the second order. Gradients were obtained by the least-squares cell-based method. The Courant-Friedrichs-Lewy (CFL) number was set to 5. Numerical simulations were performed until the fluctuations of the aerodynamic coefficients became less than 0.1%.
No-slip boundary conditions were assigned to all model surfaces. Dirichlet boundary conditions concerning velocity and pressure were imposed on the far-field boundary, while zero Neumann boundary conditions (zero normal velocity and normal gradients) were defined on the symmetry plane.

Figure 5. Relative pressure distribution at cross-section


S2 at AoA = 0.2; - SA on F-mesh, -- SA on M-mesh
Although refining the mesh increases the accuracy of the numerical results, the end of the sonic zone (terminating in a shock wave) is hard to simulate on all meshes by the RANS equations. This is particularly evident at higher AoA at cross-section S1 near the wing tip.
Cross-section S3, nearest to the fuselage, seems to be simulated most accurately.

Figure 8. Relative pressure contours along the model at


AoA = -1.57 at M-mesh, SA model

Figure 6. Relative pressure distribution at cross-section


S3 at AoA = -3.3; - SA on M-mesh, -- kwSST on C-mesh
Although discrepancies between the two sets of data exist, the global numerical data (in the form of the normal force coefficient presented in fig. 7) agree well with experimental data obtained on the ONERA M5 model at the same Mach and Reynolds numbers in different wind tunnels [3]. Both models possess the same geometrical features, with the M5 model being somewhat bigger, with a fuselage length of 1.058 m. The numerical results presented in fig. 7 were obtained on the M-mesh with the SA turbulence model.

Figure 9. Pressure coefficient contours along the wing at


AoA = -3.3 and 0.2 at M-mesh, SA model

4.2. Velocity field


Flow field around the model can also be presented by
Mach number contours, figs. 10 and 11. Variations in the
shape of the sonic bubble appearing primarily at the
suction side of the wing nicely illustrate the change of
aerodynamic loading with the increase of AoA, fig. 12.

Figure 7. Normal force coefficient Cn for the whole


model at different AoA, experimental data taken from [3]
Relative pressure contours over the whole model and
pressure coefficient contours over the wing are presented
in figs. 8 and 9 respectively. At both AoA = -1.57 and
AoA = 0.2 the superposition/composition of shock
waves (both weak and strong) appearing at the suction
side of the wing is well illustrated.

Figure 10. Mach number contours at three cross-planes at


AoA = -3.3 and -1.57 at M-mesh, SA model


Figure 11. Mach number contours at three cross-planes at


AoA = 0.2 and 2.06 at M-mesh, SA model

4.3. Wing tip vortex


As a result of pressure differences between wing pressure
and suction sides a vortex trails downstream from the
wing tip. The study of this effect is important since these
vortices can be the source of additional drag, noise and
hazard [7]. Due to significant velocity and pressure
gradients, it is still difficult to accurately predict wing tip
vortices numerically.
Fig. 12 illustrates the vortices appearing one, two and three chords behind the wing at different AoA in the form of the x-component of vorticity, ωx, obtained on the M-mesh by the SA model. Although the employed turbulence model greatly affects the presented results, since the turbulent kinetic energy distribution directly defines the vortex shape and size (its maximum should be at the core center), this effect can be captured sufficiently accurately by isotropic turbulence models [7].
Angle-of-attack significantly influences the generation of
vortices. At higher angles-of-attack, the vortices occur not
only because of the wing tip but also due to the complex
composition of sonic bubbles terminating in shock waves
appearing at the suction side of the wing (i.e. mixed
shock/angle-of-attack induced separation occurring near
the wing tip).

Figure 12. Sonic bubble at the suction side of the wing and x-vorticity ωx [s-1] in three planes behind the wing at AoA = -3.3°, -1.57°, 0.2° and 2.06°, respectively
Vortices circulate counterclockwise as viewed from the front. At higher AoA the vortices decay more slowly and significantly broaden. The existence of the wing-tip vortices is captured 3 chords downstream.


[3] Binion, T. W. Jr., "Tests of the ONERA calibration models in three transonic wind tunnels", AEDC-TR-76-133, Arnold Air Force Station, Tennessee, 1976.
[4] Nakakita, K., Kurita, M., Mitsuo, K., "Development of the pressure-sensitive paint measurement for large wind tunnels at Japan Aerospace Exploration Agency", 24th International Congress of Aeronautical Sciences, Yokohama, Japan, 2004.
[5] Yoshida, K., Ueda, Y., Noguchi, M., "Experimental
and numerical analysis on lift and transition
characteristics of ONERA-M5 configuration model",
25th International Congress of Aeronautical
Sciences, Hamburg, Germany, 2006.
[6] Kroll, N., Abu-Zurayk, M., Dimitrov, D., Franz, T.,
Fuhrer, T., Gerhold, T., et al., "DLR project DigitalX: towards virtual aircraft design and flight testing
based on high-fidelity methods", CEAS Aeronaut J, 7
(2016) 3-27.
[7] Čantrak, Đ. S., Kushner, L. K., Heineck, J. T., "Time-resolved stereo PIV investigation of the NASA Common Research Model in the NASA Ames Fluid Mechanics Laboratory 32- by 48-in indraft wind tunnel", Center for Turbulence Research Annual Research Briefs, (2014) 179-191.
[8] Deere, K. A., Luckring, J. M., McMillin, S. N.,
Flamm, J. D., "CFD Predictions for Transonic
Performance of the ERA Hybrid Wing-Body
Configuration", AIAA Science and Technology
Forum and Exposition, San Diego, CA, 2016.
[9] Vassberg, J. C., "A Unified Baseline Grid about the
Common Research Model Wing-Body for the Fifth
AIAA CFD Drag Prediction Workshop", 29th AIAA
Applied Aerodynamics Conference, Honolulu, HI,
2011.
[10] Ochi, A., Shima, E., "A Hybrid Unstructured Grid
System for Viscous and Inviscid Aerodynamic
Analysis", 23rd International Congress of
Aeronautical Sciences, Toronto, Canada, 2002.

5. CONCLUSION
A comparative experimental and numerical investigation of transonic flow around the ONERA M4 standard model has been performed. Satisfactory agreement between the locally measured and computed pressure distributions has been achieved.
Numerical simulation is an important tool in aircraft design since it requires less time and resources than other investigations producing a similar scope of results. It complements wind tunnel testing and provides valuable additional results and fluid flow visualization. However, there are still many limitations, especially in flight regimes such as the transonic one, and additional investigations must be performed. A good practice for validating a numerical set-up and making sure that the obtained computed results are within the expected range of accuracy is comparison with experimental data for calibration models.
Transonic regimes are extremely important for
contemporary airliners. Sonic zones terminating in shock
waves and wing tip vortices can significantly increase
drag and deteriorate aerodynamic performances. Any
attempt made in their successful simulation can lead to
improved aircraft design. Therefore, despite the fact that
the steady RANS models used in the study are not able to
completely capture identified complex flow phenomena,
presented numerical study could be a good basis for more
detailed and precise further investigations.

ACKNOWLEDGEMENT
The paper is a contribution to the research TR 35035
funded by the Ministry of Education, Science and
Technological Development of the Republic of Serbia.

References
[1] Takakura, Y., Ogawa, S., Wada, Y., "Transonic wind-tunnel flows about a fully configured model of aircraft", AIAA Journal, 33 (3) (1995) 557-559.
[2] Lee, B. H. K., "Self-sustained shock oscillations on airfoils at transonic speeds", Progress in Aerospace Sciences, 37 (2001) 147-196.

COMPUTATIONAL ANALYSIS OF HELICOPTER MAIN ROTOR BLADES IN GROUND EFFECT
ZORANA TRIVKOVI
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, zposteljnik@mas.bg.ac.rs
JELENA SVORCAN
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, jsvorcan@mas.bg.ac.rs
MARIJA BALTI
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, mbaltic@mas.bg.ac.rs
DRAGAN KOMAROV
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, dkomarov@mas.bg.ac.rs
VASKO FOTEV
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, vfotev@mas.bg.ac.rs

Abstract: Numerical investigation of an isolated representative helicopter main rotor has been performed in ANSYS
FLUENT 16.2. In general, flow field around the rotor is unsteady, three-dimensional, complex and vortical. Such a
simulation requires substantial computational resources. Ground effect, which improves the aerodynamic performances
of the rotor, represents an additional challenge to numerical modeling. In this study, flow field is computed by Unsteady
Reynolds Averaged Navier-Stokes (URANS) equations. Both Frame of reference and Sliding mesh approaches were
employed to model the rotor rotation. Obtained results are compared to results obtained by simpler, sufficiently reliable
models such as Momentum Theory (MT) and Blade Element Momentum Theory (BEMT). Presented results include fluid
flow visualizations in the form of pressure, velocity and vorticity contours and the values of aerodynamic coefficients.
Keywords: helicopter, rotor, RANS, ground effect, power coefficient.

1. INTRODUCTION
The flow field around helicopter blades is highly complex [1-6], with many flow phenomena present (e.g. unsteadiness, tip vortex formation, 3D dynamic stall, blade-vortex interaction, shock/boundary-layer interaction, etc.) [5]. Because of the unique characteristics of helicopters, extensive experimental and numerical research is constantly being conducted for the purpose of improving their aerodynamic performances. Several world-wide projects, e.g. HELISHAPE, HART II, GOAHEAD, have been performed in the last two decades [2-6].

Another interesting phenomenon is that the thrust of a helicopter rotor, operating at constant power, increases as it approaches the ground since the development of the rotor wake is constrained [1, 7]. Such an effect is extremely important when determining rotor performances and has been studied both experimentally and numerically [7-9] but is still not fully understood [1].
Because of the great complexity of the problem in question,
several simplifications of the study had to be adopted.
Although aspect ratio is high for helicopter blades, they were
considered rigid. Coning angle of the blades (as a
consequence of mutual aerodynamic and inertial loads) was
neglected. Collective pitch, necessary for achieving different
values of thrust, was estimated by BEMT. The effect of
helicopter fuselage was implicitly included in forward flight
computation through necessary values of angle-of-attack.
Unfortunately, these limitations make comparison to
experimental data more difficult and less accurate.

Since hover is the basic, simplest and most important


flight regime of a helicopter, hover performances are an
extremely important issue in the helicopter design process
("a dimensioning condition" as stated in [3]). Obtaining a
usable, sufficiently accurate numerical solution is a
difficult task [3]. Several models different in complexity
exist, that can be used for simulation of axis-symmetric
flow field in hover and vertical flight. They include
momentum models, combined blade-element-momentum
models and full Navier-Stokes equations capable of
capturing the changes of flow quantities along the blade.

The paper is structured as follows. A short description of the representative rotor used in the computations is given in the next section. This is followed by a short description of the used analytical and numerical models and the adopted numerical set-up. Results for both hover and forward flight conditions are presented in the section "Results and discussion". In the end, short concluding remarks are given.

Compared to hover, the forward flight regime is even more complicated. Rather than being axis-symmetric, the flow field is quite irregular over the rotor disc, and variations of flow quantities per angular coordinate also exist.

3.1. Momentum Theory (MT)


Detailed equations can be found in [1]. Conservation laws are applied in a quasi-one-dimensional integral formulation to a control volume surrounding the rotor and its wake. This simple approach enables a first-level analysis of the rotor thrust and power without having to consider the blade characteristics. In the modified MT, the actual power required to hover is the sum of the induced and profile power:

2. MODEL DESCRIPTION
The representative model of a main helicopter rotor was
taken from [2] where a description of a conducted
experimental investigation on performances of two
different rotors, baseline and BERP-type, in the Langley
Transonic Dynamics Tunnel (TDT) can be found.
Experimental data, obtained in hover and forward flight
over a nominal range of advance ratios from 0.15 to
0.425, were used for validation of numerical models.

CP = CPi + CP0 = κ·CT^(3/2)/√2 + σ·Cd0/8    (1)

Power can be obtained from the required power coefficient as P = CP·ρ·π·Ω^3·R^5, and thrust as T = CT·ρ·π·Ω^2·R^4, where ρ is the air density and Ω the rotor angular velocity.
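As an illustration of how Eq. (1) is evaluated, the sketch below computes the hover power for an assumed thrust coefficient; σ = 0.101 and Ω = 149 rad/s are quoted in the text and R ≈ 1.43 m is inferred from the 1.428 m blade length, while κ, Cd0, ρ and CT are illustrative assumptions rather than values from the paper.

import math

def hover_power(CT, R=1.43, Omega=149.0, rho=1.225, sigma=0.101,
                kappa=1.15, Cd0=0.01):
    """Modified momentum theory, Eq. (1): CP = kappa*CT^1.5/sqrt(2) + sigma*Cd0/8."""
    CP = kappa * CT ** 1.5 / math.sqrt(2.0) + sigma * Cd0 / 8.0
    P = CP * rho * math.pi * Omega ** 3 * R ** 5   # required power, W
    T = CT * rho * math.pi * Omega ** 2 * R ** 4   # thrust, N
    return CP, P, T

CP, P, T = hover_power(CT=0.006)
print(round(CP, 5), round(P / 1000.0, 1), round(T, 0))   # CP, P [kW], T [N]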

The representative main rotor model consists of 4 rectangular, 1.428 m long blades. The blades use two U.S. airfoils: the 10% thick RC(4)-10 (r/R ≤ 0.84) and a scaled 8% thick RC(3)-08 (r/R ≥ 0.866), fig. 1. A smooth transition is made between these two different airfoil shapes. The solidity of the rotor is σ = 0.101. The twist distribution is triple linear, fig. 2.

In forward flight, due to the forward speed and the existence of the fuselage, additional terms appear:

CP = κ·λi·CT + (σ·Cd0/8)·(1 + 4.65·μ^2) + (1/2)·(f/A)·μ^3    (2)

where μ is the advance ratio, λi the induced inflow ratio and A = π·R^2 the rotor disc area.

Although this model assumes a uniform distribution of flow quantities across the rotor disc, by "correctly" estimating the values of the induced power correction factor κ, the section profile drag coefficient Cd0 and the fuselage equivalent wetted area f, it is possible to obtain sufficiently accurate estimations of the required power coefficient. Here, these values were adopted in accordance with the available experimental data.
By replacing the rotor by a simple source with an image source to simulate the ground effect [7], for constant power, the ratio of thrust in (IGE) and out of ground effect (OGE) in hover can be presented by the corresponding ratio of induced velocities. Assuming uniform distributions over the rotor area, the equation becomes:

Figure 1. Main rotor model

TIGE/TOGE |P=const = 1/(1 - (R/(4·z))^2)    (3)

where R is the rotor radius and z is the rotor height off the ground.
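A minimal numerical illustration of Eq. (3) is given below; the heights are generic values, not tied to the studied rotor.

def thrust_ratio_ige(z_over_R):
    """Cheeseman-Bennett image-source result, Eq. (3): T_IGE/T_OGE at constant power."""
    return 1.0 / (1.0 - (1.0 / (4.0 * z_over_R)) ** 2)

for z_over_R in (0.5, 1.0, 1.5, 2.0):
    print(z_over_R, round(thrust_ratio_ige(z_over_R), 3))
# the benefit grows quickly below about one rotor radius off the ground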

Figure 2. Twist distribution


Nominal test conditions are defined by the advance ratio μ, the tip Mach number MT (constantly kept at 0.628), the rotor-shaft angle-of-attack and the blade collective pitch angle. Since the values of the last two angles were not known, they were estimated by BEMT. In hover (μ = 0), data were obtained at z/d = 0.83, where z is the distance from the wind-tunnel floor to the rotor hub. Numerical simulations were also performed for another, smaller relative distance, z/d = 0.25, and the two sets of data were compared.

3.2. Blade Element Momentum Theory (BEMT)


This hybrid approach combines the basic principles from
both the blade element and momentum theory. As a
result, if the blade twist distribution is known, it is
possible to solve the induced velocity distribution.
Afterwards, with the known airfoil aerodynamic
characteristics, it is possible to estimate thrust and power
increments along the blade. However, these integrals
become quite complicated for forward flight and here this
approach was used only for hover.

3. NUMERICAL APPROACH
As previously stated, since no single set of data is complete and able to provide detailed information in a short amount of time, several different analytical and numerical approaches, ranging from fast approximate methods to high-fidelity simulations, were applied.

Necessary airfoil aerodynamic characteristics were taken


from [10, 11].
To account for the ground effect in hover, the BEMT
results were corrected according to [1]:

CP = kG·CPi + CP0,   kG = 1/(0.9926 + 0.0379·(2·R/z)^2)    (4)



3.3. Unsteady Reynolds-Averaged Navier-Stokes equations (URANS)

Dirichlet boundary conditions concerning velocity and pressure were imposed on the inlet and outlet boundaries. No-slip boundary conditions were defined on the blade and floor surfaces. An angular velocity of 149 rad/s, resulting in a fixed MT, was assigned to the rotor zone.

Since two different distances from the ground were considered (z/d = 0.25 and z/d = 0.83) at several different thrust coefficients (different pitch distributions), several different computational grids had to be created. All generated meshes are unstructured, three-dimensional and prismatic, extending from -0.83 (-0.25) to 1.5 rotor diameters along the z-axis and 5 blade lengths in both the x- and y-directions, fig. 3. They contain two fluid zones, rotor and stator, and a total number of cells of approximately 2 million. This number was adopted after a grid convergence study. The meshes are additionally refined around the blades, fig. 4. The dimensionless wall distance around the blades is below 5, y+ < 5. The trailing edge of the blades is modeled as blunt.

Numerical simulations were performed in ANSYS FLUENT 16.2, where the governing flow equations for compressible, viscous fluid were solved by the finite-volume method. The Unsteady Reynolds-averaged Navier-Stokes (URANS) equations were closed by the two-equation k-ω SST turbulence model. The fluid, air, was considered as an ideal gas whose dynamic viscosity changes according to the Sutherland law.
A pressure-based coupled solver was used. Gradients were obtained by the least-squares cell-based method. Spatial discretizations were of the second order. Where needed (unsteady simulations of the isolated rotor in forward flight), the temporal discretization was of the first order. The Courant-Friedrichs-Lewy (CFL) number was in the range 1-5.

4. RESULTS AND DISCUSSION


Obtained results are grouped according to the flight
condition. Rotor hover performance was computed at 5
different collective angles. Tip Mach number was kept
constant in all performed simulations.

4.1. Hover
Obtained relations between thrust coefficient CT and
required power coefficient CP at z/d = 0.83 and z/d = 0.25
are presented in figs. 5 and 6 respectively.

Figure 3. Example of a generated mesh; blue - pressure inlet, red - pressure outlet, black - ground, yellow - interface between rotor and stator

Figure 4. Mesh along the blade surfaces


In order to decrease the number of cells and better resolve
the fluid flow it is customary to generate only a part of the
mesh around a single blade and define periodic boundary
conditions at the sides. This approach is particularly
applicable in axis-symmetric hover and vertical flight
conditions. However, since in this study forward flight
condition was also considered, complete meshes were
generated and used for both flight cases.

Figure 5. CP = f(CT) at z/d = 0.83


Although experimental data is available only for z/d =
0.83, all numerical models clearly capture the increase in
aerodynamic performances in ground vicinity, i.e. for the
same thrust less power is required. BEMT results very
well correspond to experimental data. FLUENT results
somewhat underestimate the rotor performance although
the character of the relation CP-CT is accurately captured.

Apart from the computational grid appearance, the flight condition also dictates the numerical approach. An isolated rotor in hover and vertical flight can successfully be simulated by steady flow in a rotating frame of reference, while the forward flight condition is better represented by moving meshes, since the periodic unsteadiness during one rotation can be more accurately captured. Here, both approaches were used.

This small discrepancy can, at least partially, be explained


by the existence of wind tunnel walls (not present in
numerical model). Another explanation may be the
numerical set-up, i.e. mesh density, turbulence model, the
use of the steady frame of reference approach, etc.

A greater pressure difference, resulting in increased thrust, is evident for z/d = 0.25 at the inner parts of the blade. Near the blade tip, for smaller z the velocity increases and the pressure
decreases.
Behavior of the wake from the hovering rotor in ground
effect can be presented by streamlines, fig. 9. Slipstream
expansion near the surface is obvious.

Figure 6. CP = f(CT) at z/d = 0.25


Numerical results obtained in FLUENT can also be validated against the known relation marked as eq. 3, fig. 7. A plot of the thrust ratio in hover versus the relative height from the ground has been drawn for different geometries (i.e. different collective pitch angles). Although small deviations exist (in particular for higher collective pitch angles), it can be concluded that the trend of the change has been successfully captured. The ground effect is particularly important for z/R < 1, i.e. z/d < 0.5.

Figure 9. Streamlines from the hovering rotor for one


collective angle at z/d = 0.83 and z/d = 0.25
Flow field is affected by the vicinity of the ground, both
slipstream and induced velocities change, resulting in
altered power and thrust coefficients. These changes can
also be illustrated by vorticity fields, fig. 10.

Figure 7. TIGE/TOGE = f(z/R) for different geometries


An illustrative comparison of fluid flows at two different
distances from the ground can be made by comparing
pressure coefficients Cp along two spanwise locations on
the blade, r/R = 0.775 and r/R = 0.945, for a single
geometry (collective angle), fig. 8.

Figure 10. Vorticity contours in [s-1] in midplane for one


collective angle at z/d = 0.83 and z/d = 0.25

4.2. Forward flight


The ground effect on rotor performance in forward flight is also important. However, the flow field around the rotor gets even more complicated [1]. The flow characteristics also greatly depend on the forward speed, i.e. the advance ratio μ, and for the same geometry (collective angle) the thrust coefficient increases in forward flight when compared to hover. For that reason, forward flight simulations were performed on 3 different model geometries, for two distances from the ground, z/d = 0.83 and z/d = 0.25, and a single advance ratio μ = 0.1.

Figure 8. Chordwise Cp distributions at 2 cross sections


for one collective angle at z/d = 0.83 (-) and z/d = 0.25 (--)

Full line denotes z/d = 0.83 while dashed line refers to z/d = 0.25. Although no experimental data is available for comparison, the computed plots are consistent with results obtained by other authors [5, 12].

In order to accurately simulate the forward flight condition, it was necessary to use the sliding mesh approach. Pressure far-field boundary conditions defining the values of pressure,

velocity and turbulence quantities were assigned to the outer domain surfaces. The time step corresponds to an angular increment of 5°. A great number of rotations (around 10) was necessary for attaining quasi-convergence of the thrust and power coefficients.

Blade sectional pressure coefficients at the radial positions x/R = 0.775 and x/R = 0.945 during one revolution, with an azimuthal increment of 60°, are presented in figs. 13 and 14, respectively. Full line denotes z/d = 0.83 and dashed line refers to z/d = 0.25. Again, the computed results are comparable to other published results [6, 13]. Since the advance ratio is low, the overall flow variations are smaller at the inner part of the blade.

Results computed in ANSYS FLUENT, marked by square symbols in figs. 11 and 12, were compared to experimental data (where available) and MT results. Again, the power coefficients computed by URANS are somewhat higher than those by MT, and both sets of numerical results seem higher than the experimental values. However, the character of the relations seems to be well captured, although the discrepancies increase with the increase of the advance ratio (i.e. for μ = 0.1). Ideally, for a thorough analysis a complete map of aerodynamic performances should be generated (for various values of the advance ratio and thrust coefficient). However, since the simulations require large amounts of time, at this stage of the study only individual results are presented (as discrete points on the graphs). One color refers to a single value of the thrust coefficient.

Figure 13. Chordwise Cp distributions at x/R = 0.775

Figure 11. CP = f() at z/d = 0.83

Figure 12. CP = f() at z/d = 0.25


Although the thrust coefficients are different, smaller amounts of required power at the lower distance from the ground (z/d = 0.25) for low advance ratios are evident (i.e. for the same CP a much higher CT can be achieved). Unfortunately, no experimental data is available for the employed model of the helicopter rotor at low advance ratios.

Figure 14. Chordwise Cp distributions at x/R = 0.945


The main differences between the two ground distances can primarily be seen at the advancing side of the rotor. At the retreating side, the pressure distributions seem quite similar.

Flow structures in slow forward flight can be illustrated


by streamlines, fig. 15. Region of flow recirculation
formed upstream of the rotor at z/d = 0.25 is noticeable.
Employed CFD solver seems able to reproduce the
transient flow and loading of the blade.

References
[1] Leishman, J.G.: Principles of Helicopter Aerodynamics, 2nd ed., Cambridge University Press, New York, 2006.
[2] Yeager, W.T. Jr., Noonan, K.W., Singleton, J.D., Wilbur, M.L., Mirick, P.H.: Performance and Vibratory Loads Data from a Wind-Tunnel Test of a Model Helicopter Main-Rotor Blade with a Paddle-Type Tip, NASA TM 4754, Hampton, Virginia, 1997.
[3] Pomin, H., Altmikus, A., Buchtala, B., Wagner, S.: Rotary Wing Aerodynamics and Aeroelasticity, in High Performance Computing in Science and Engineering 2000, Springer-Verlag, Berlin Heidelberg, 2001.
[4] Beaumier, P., Bousquet, J.-M.: Applied CFD for analyzing aerodynamic flows around helicopters, 24th International Congress of the Aeronautical Sciences, Yokohama, Japan, 2004.
[5] Barakos, G., Steijl, R., Badcock, K., Brocklehurst, A.: Development of CFD capability for full helicopter analysis, 31st European Rotorcraft Forum, Florence, Italy, 2005.
[6] Antoniadis, A.F., Drikakis, D., Zhong, B., Barakos, G., Steijl, R., Biava, M. et al.: Assessment of CFD methods against experimental flow measurements for helicopter flows, Aerospace Science and Technology, 19 (2012) 86-100.
[7] Cheeseman, I.C., Bennett, W.E.: The Effect of the Ground on a Helicopter Rotor in Forward Flight, R. & M. No. 3021, London, 1957.
[8] Ganesh, B.: Unsteady Aerodynamics of Rotorcraft at Low Advance Ratios in Ground Effect, Ph.D. thesis, Georgia Institute of Technology, 2006.
[9] Pulla, D.P.: A Study of Helicopter Aerodynamics in Ground Effect, Ph.D. thesis, The Ohio State University, 2006.
[10] Bingham, G.J., Noonan, K.W.: Two-Dimensional Aerodynamic Characteristics of Three Rotorcraft Airfoils at Mach Numbers from 0.35 to 0.90, NASA TP 2000, Hampton, Virginia, 1982.
[11] Noonan, K.W.: Aerodynamic Characteristics of Two Rotorcraft Airfoils Designed for Application to the Inboard Region of a Main Rotor Blade, NASA TP 3009, Hampton, Virginia, 1990.
[12] Jarkowski, M., Woodgate, M.A., Barakos, G.N., Rokicki, J.: Towards consistent hybrid overset mesh methods for rotorcraft CFD, Int. J. Numer. Meth. Fluids, 74 (2014) 543-576.
[13] Biava, M., Khier, W., Vigevano, L.: CFD prediction of air flow past a full helicopter configuration, Aerospace Science and Technology, 19 (2012) 3-18.

Figure 15. Streamlines from the rotor in forward flight


for one collective angle at z/d = 0.83 and z/d = 0.25

5. CONCLUSION
Aerodynamic performances of an isolated model
helicopter main rotor in ground effect (at two distances
from the ground) obtained by several different analytical
and numerical approaches, ranging from fast approximate
to high-fidelity detailed solutions, were compared. For
most cases, satisfactory outcome (agreement with
experimental data) was accomplished given the fact that
relatively limited success has been achieved in correctly
predicting rotor performance in ground effect when
compared to experimental results [1]. This is due to the
problem complexity and strongly viscous nature of the
rotor IGE problem.
None of the employed numerical models can fully capture
the complexity of the flow. However, through their
combination and comparison with available experimental
data many useful pieces of information can be extracted.
Presented results have contemporary importance and
enable the development of a more efficient rotor design.
They also provide insight into complex flow fields around
a representative rotor in hover and forward flight (two
quite different, but equally important flight regimes).
Although additional work is necessary, it is possible to
use presented numerical set-ups to assess possible
increase of aerodynamic performances in ground effect.

ACKNOWLEDGEMENT
The paper is a contribution to the research TR 35035
funded by the Ministry of Education, Science and
Technological Development of the Republic of Serbia.


SIMULATION OF ROLL AUTOPILOT OF A MISSILE WITH INTERCEPTORS
MILAN IGNJATOVI
Military Technical Institute, Belgrade, milan.ignjatovic@hotmail.rs
MILO PAVI
Military Technical Institute, Belgrade, cnn@beotel.rs
SLOBODAN MANDI
Military Technical Institute, Belgrade, msmanda@open.telekom.rs
BOJAN PAVKOVI
Military Technical Institute, Belgrade, bjnpav@gmail.com
NATAA VLAHOVI
PhD studies: School of Electrical Engineering, University of Belgrade,
Work place: Military Technical Institute, Belgrade, natasha.kljajic@yahoo.com

Abstract: In this paper, a nonlinear simulation of the roll autopilot of a missile using interceptors as actuators is presented. The missile roll velocity transfer function used in this paper is of the first order. The autopilot design was done without the nonlinear block. The simulation is developed using computer software. Results of the simulation, for several different values of the varied parameters, are given.
Keywords: simulation, missile, roll, autopilot, interceptors.

1. INTRODUCTION


Although most guided missiles use fins for stabilizing and


controlling missile flight, some use interceptors.
Interceptors are surfaces intercepting the air flow. Their
actuators are electro-magnets causing their movement.

GA = K/(T·s + 1)    (1)

where K (KFI) = 14 and T (TFI) = 0.32.


The transfer function of the interceptors is given with a steady-state value equal to unity:

In this paper we have developed a simulation of the roll autopilot of an air-to-land guided missile which has interceptors.


The missile has four fixed wings. There are six interceptors mounted on the rear side of the wings. They are all paired, so there are three pairs. One pair of interceptors is used for roll stabilization and is mounted on one pair of wings. The other two pairs are mounted on the other pair of wings and serve for yaw and pitch control. The surfaces of the roll pair are moved in opposite directions, while the surfaces of the yaw and pitch pairs are moved in the same direction.

GAKT = 1/(Ta·s + 1)    (2)

where Ta = 0.002.
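For reference, the two first-order blocks (1) and (2) can be reproduced with scipy.signal as a quick sanity check; this Python snippet is only an illustration, not the Matlab/Simulink model used in the paper.

from scipy import signal

GA = signal.TransferFunction([14.0], [0.32, 1.0])     # roll-rate block, Eq. (1)
GAKT = signal.TransferFunction([1.0], [0.002, 1.0])   # interceptor actuator, Eq. (2)

t, y = signal.step(GA)      # step response of the roll-rate block alone
print(y[-1])                # approaches the steady-state gain K = 14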
The NL_Func block is where the command signal going to the actuators, η (eta), is calculated from the command signal ζ (zeta). This calculation is given in Section 3.

ζdis (zeta_dis) is a disturbance signal equivalent to a surface deflection due to a fault in construction.

The simulation is developed using the Matlab and Simulink software package.


K and GR are parameters of roll autopilot. They are
determined in design procedure given in [1], which assumes
ailerons as control surfaces. Values of determined
parameters are equal to 5.7 and 0.5 respectively

2. SIMULATION MODEL
The simulation model is given in Picture 1. The roll velocity transfer function is given, as in [1], by the first-order transfer function (1).


Picture 1. Simulation model

3. COMMAND SIGNAL CALCULATION

tc = f(ζ, Ts, t)    (4)

Interceptors can have only two steady state values, 1 and -1.
It means that interceptor is either all the way on one side of
the wing (position 1) or all the way on the other side
(position -1).


The command time is limited so that it cannot be larger than the command sample time; with that, the command signal effect is practically limited from 0 to 1, as given by Eq. (5).

So, in order to control the missile with interceptors, we must


change the time during which interceptors are in position 1
and in position -1.

tc = t + ζ·Ts   for 0 < ζ < 1
tc = t + Ts     for ζ ≥ 1
tc = t          for ζ ≤ 0        (5)

Here, the internal structure of the NL_Func block from Picture 1 is explained. The NL_Func block has as its inputs the command signal ζ, the simulation time t and the command sample time Ts. Its output is the command signal η. The Time_past input and output is the same simulation global variable, which is used to remember the time of the last command calculation. This form of global variable manipulation is done because of software specifics.


Basically, what we are doing is modulating the command sample period with a percentage of the period determined from the command signal ζ, thus linearly mapping the signal ζ, from 0 to 1 (with saturation at 0 and 1), to the command signal going to the actuators η, from -1 to 1.
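The NL_Func logic described by Eqs. (3)-(5) therefore amounts to a pulse-width style modulation of the interceptor position; a minimal Python sketch is given below, with illustrative names rather than those of the Simulink implementation.

def command_time(zeta, Ts, t):
    """Eqs. (4)-(5): command time tc within the current sample period,
    with zeta saturated to the range 0..1."""
    zeta_sat = min(max(zeta, 0.0), 1.0)
    return t + zeta_sat * Ts

def interceptor_command(t, tc):
    """Eq. (3): interceptor command is +1 before tc and -1 after it."""
    return 1.0 if t < tc else -1.0

# zeta = 0.5 keeps the interceptor at +1 for half of the sample period and
# at -1 for the other half, i.e. a zero-mean roll command
Ts, t0, zeta = 0.015, 0.0, 0.5
tc = command_time(zeta, Ts, t0)
for t in (0.0, 0.004, 0.008, 0.012):
    print(round(t, 3), interceptor_command(t, tc))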

The command signal is calculated periodically. This period should not be too small, because we are restricted by the actuator's time constant, which is, as we said, 2 [ms], and by the time needed for forming the aerodynamic forces due to the interceptor position change. This period is called the command sample time (Ts). In Section 4 it is shown how different values of Ts affect the output value. In Picture 2, one command sampling period is shown. The Command_time input and output is the same global variable, which is used to remember the last calculated command time tc.
We can see that before the time reaches the command time tc, the command signal η has the value 1, and after this moment the value -1. Here, it is given mathematically:

η = +1 for t < tc,   η = -1 for t > tc    (3)

Picture 2. tc dependence


4. SIMULATION RESULTS
In this section, results for three different values of the command sample time are presented.

Command time (tc) is calculated as a function of the command signal (ζ), the command sample period (Ts) and time (t), as given by Eq. (4).

On Picture 3, we can see the step responses to a π/4 step input for

command sample time values of Ts = 5 [ms] , Ts = 15 [ms]


and Ts = 30 [ms] .

Picture 5. Ts = 15 [ms]
On Picture 5 we see the change of ζ and ηa for Ts = 15 [ms].

Picture 3. Step responses


Step responses change with changing command sample
time.

We can see that the value of ζ begins to fluctuate when it reaches the steady-state value, because the GA output is starting to react to the change in ηa. The real system is not expected to behave like this.

With smaller values of the command sample time, the transient process is faster, the response has less overshoot and a larger static error.
With larger values of the command sample time, delay is added to the response, the overshoot is larger and the static error is smaller.

Picture 6. Ts = 30 [ms]
On Picture 6 we see the change of ζ and ηa for Ts = 30 [ms].

Picture 4. Ts = 5 [ms]
In Picture 4 we can see the change in the command signal ζ and the interceptor movement ηa for Ts = 5 [ms]. We have said that the command signal ζ is practically limited from 0 to 1; here, it is represented as it is calculated, without limitations.

It can be seen that for this high value of Ts the interceptors are able to achieve the given command. The ζ and ηa fluctuation is increased due to the delay added to the system.

Shorter command sample time has better transient process


and larger static error, while larger command sample time
has worse transient process and smaller static error.

Up to approximately 0.31 [s], ηa is not modulated because, owing to the interceptor time constant, the interceptor cannot achieve the demanded change.

5. CONCLUSION

When the response reaches the settling time, we can see that the command signal ζ has a value of 0.5, which, as we said, corresponds to a zero command. This means that for half of the command sample time the interceptors will be in position 1 and for the other half in the opposite position.

In this paper, one way of simulating the roll autopilot of a missile using interceptors is presented. We have shown simulation results for three different values of the command sample time. Changing the command sample time has an influence on the step response of the roll autopilot.


References
[1] Program APD Theoretical Manual, Military Technical Institute, Belgrade, 2003.
[2] Garnell, P.: Guided Weapon Control Systems, Second Edition, Pergamon Press, New York, 1980.
[3] Palm, W. III: System Dynamics, McGraw-Hill, New York, 2014.


DESIGN OF THE MAIN PIVOT ON THE FORCED OSCILLATION APPARATUS FOR THE WIND TUNNEL MEASUREMENTS
MARIJA SAMARDI
Military Technical Institute, Belgrade, majasam@ptt.rs
DRAGAN MARINKOVSKI
Military Technical Institute, Belgrade, marinkovskid@ikomline.net
DUAN URI
Military Technical Institute, Belgrade, dusan.curcic@vti.vs.rs
ZORAN RAJI
Military Technical Institute, Belgrade, zoran.rajic@vti.vs.rs
ABDELWAHID BOUTEMEDJET
Military Academy, Belgrade, abdelwahed1954@gmail.com

Abstract: The main pivot on the forced oscillation apparatus for dynamic measurements in the T-38 wind tunnel is
described in this paper. Design of such element is complicated by restricted space inside the wind tunnel models. The
pivot of the T-38 forced oscillation apparatus is formed from a pair of symmetrical cross-flexures. Two types of the
cross-flexures are presented: cross-flexures with uniform cross-section of the strips and cross-flexures with variable
cross-section of the strips. Stress analysis of the cross-flexures showed that strips with variable cross-section much
better matched strict requirements of the dynamic wind tunnel measurements.
Keywords: flexure pivot, oscillations, wind tunnel, dynamic stability derivatives.
A special kind of flexure pivot is the cross-flexure pivot, Picture 2. It has a bi-symmetrical geometry and contains two leaf springs of equal dimensions crossing at their midpoints [1-4]. These pivots permit a high rotational accuracy to be obtained via a compact, reliable and maintenance-free design with limited production costs. The cross-flexure pivots are superior to conventional joints in controlling an oscillatory motion. They are characterized by high compliance with respect to the in-plane rotational degree of freedom and high stiffness in the other, secondary, degrees of freedom. These characteristics make them very useful in dynamic wind tunnel experiments with forced oscillation motion [5, 6].

1. INTRODUCTION
The idea of supporting the moving parts of sensitive apparatuses on thin strips of metal or other elastic material, rather than on other types of pivots and bearings, is not new. The simplest and most common type of flexure pivot consists of a thin metal strip which is free to bend. Frequently two of these, one at right angles to the other, are machined out of a rod, Picture 1. This provides ball-and-socket action in rods subjected to tension or compression and having only a negligible amount of moment.

Picture 1. Flexure pivot

Picture 2. Cross-flexure pivot




2. DYNAMIC EXPERIMENT IN THE T-38 WIND TUNNEL

Performance parameters of the apparatus for the pitching experiments, according to the expected aerodynamic loads in the T-38 wind tunnel dynamic experiment, are:
- model oscillation amplitude: 0.25° - 1.5°
- model oscillation frequency: 1-15 Hz
- maximum normal force: Rm = 1800 N
- maximum axial force: Rt = 5600 N
- maximum side force: Rs = 3000 N.

The main task of the dynamic experiment in wind tunnels is to obtain model-scale dynamic stability information of the aircraft at realistic Reynolds and Mach numbers. The forced oscillation techniques are most often used for the dynamic experiments [7]. The apparatus for the dynamic measurements in the T-38 wind tunnel is a full-model forced oscillation apparatus with the primary angular oscillation around the wind tunnel model transversal axis. The wind tunnel model is forced to oscillate at constant amplitude. The apparatus is distinguished by the capability to measure aerodynamic reactions in the primary and secondary degrees of freedom. The front part of the apparatus is shown in Picture 3.

3. THE CROSS-FLEXURE PIVOT


In the first design concept of the pivot the cross-flexures were considered as rectangular strips of equal width and thickness along the strips. The first design of the pivot and the load on the cross-flexure pivot in the horizontal plane are shown in Picture 4, where Rs is the aerodynamic side force, Ms is the bending moment, c.p. is the center of pressure, e is the distance of the internal flexure strip axis from the longitudinal axis of the apparatus, l is the length of each strip, f is the distance of the outside flexure strip axis from the longitudinal axis of the apparatus, c is the distance between the two blocks of the cross-flexures and xp is the distance from the centre of pressure to the centre of the cross-section.
The bending moment at the centre of the inner surface of the moving block is:

MS = Rs (xp - c/2)   (1)

Picture 3. Apparatus for the T-38 wind tunnel dynamic measurement [8]

The structural rigidity is a very important requirement in any forced oscillation apparatus. The elastic element has to be designed in such a way that it permits the primary motion of the model, while its high stiffness in the other degrees of freedom is necessary to withstand the aerodynamic loads with negligible deflection. This relation between relatively high compliance in the primary degree of freedom and high stiffness in the secondary degrees of freedom is especially important in the measurement of the cross and cross-coupling derivatives [9]. Because of all these requirements the cross-flexure pivot was chosen for the main pivot on the T-38 wind tunnel forced oscillation apparatus. The special shape of the individual flexure strips was selected in the design of this elastic element.

The bending moments at the ends of the strips, MA and MB, are:

MA = MB = Rs l / 8   (2)

The total direct forces in one pair of strips in the horizontal plane, YA and YB, are:

YA = YB = Rs l / (4(e + f)) + MS / (2(e + f))   (3)

Picture 4. Aerodynamic load in horizontal plane


Maximum bending stress in the strips in the horizontal plane, σA, is:

σA = YA / Ah + MA / wh   (4)

where Ah is the cross-section area of the strips and wh is the section modulus of the strips.

Table 1 lists the values of the total forces, bending moments and bending stress for one pair of strips in the horizontal plane, obtained according to Equation (4).
Table 1. Results of design analysis in horizontal plane (one pair of strips in horizontal plane)
Ms [Nm]    YA = YB [N]    MA = MB [Nm]    σA [N/mm²]
85.5       1943.3         21.75           789.8
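To make it easier to reproduce the design analysis, a minimal numerical sketch of Equations (1)-(4) is given below. All geometric values in it are illustrative assumptions (chosen only so that the results land near the order of magnitude of Table 1), not the actual T-38 pivot dimensions.

```python
# Minimal sketch of Equations (1)-(4) for one pair of strips in the horizontal
# plane. All geometric values are illustrative assumptions, NOT the actual
# T-38 pivot dimensions.
Rs = 3000.0        # maximum side force [N] (apparatus specification)
xp = 0.0385        # assumed distance from centre of pressure to cross-section centre [m]
c  = 0.020         # assumed distance between the two cross-flexure blocks [m]
e  = 0.020         # assumed inner strip axis offset from the apparatus axis [m]
f  = 0.0244        # assumed outer strip axis offset from the apparatus axis [m]
l  = 0.058         # assumed strip length [m]
b_s, t_s = 0.030, 0.0024   # assumed strip width and thickness [m]

A_h = b_s * t_s                 # strip cross-section area [m^2]
w_h = b_s * t_s ** 2 / 6.0      # section modulus of a rectangular strip [m^3]

M_S = Rs * (xp - c / 2.0)                                   # Eq. (1) [Nm]
M_A = M_B = Rs * l / 8.0                                    # Eq. (2) [Nm]
Y_A = Y_B = Rs * l / (4 * (e + f)) + M_S / (2 * (e + f))    # Eq. (3) [N]
sigma_A = Y_A / A_h + M_A / w_h                             # Eq. (4) [Pa]

print(f"M_S = {M_S:.1f} Nm, M_A = {M_A:.2f} Nm, Y_A = {Y_A:.1f} N")
print(f"sigma_A = {sigma_A / 1e6:.1f} N/mm^2")
```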
Picture 6. Maximum stress in the strips in horizontal
plane

A load applied in the stress analysis done by NASTRAN


NX V9 is shown in Picture 5. Yawing moment at the
centre of the cross-flexure pivot, Ms = RS xp, is generated
by side force RS. Side force and yawing moment at the
centre of the cross-flexure pivot were simulated by a side force which acted at the front surface of the moving block of the pivot.

The stress analysis showed that the aerodynamic load-carrying capability may be improved by a design with variable cross-section of the strips, Picture 7. The largest thickness in the centre and the largest width at the ends of the strips were chosen to achieve the greatest load-carrying capability for a given allowable stress and the restricted space inside the wind tunnel models. Both elastic elements, with uniform and with variable cross-section of the strips, are equal in general dimensions: the distance between the moving and the immobile block, the length of the flexures, and the distances of the internal and outside flexure strip axes from the longitudinal axis of the apparatus.

Picture 7. Cross-flexure pivot with the variable cross-section strips


In the stress analysis of the strips with the variable cross-section, the load was applied in the same way as in the stress analysis of the strips with uniform cross-section. For the simulated loads in the horizontal plane, the maximum normal stress, σ1, is obtained close to the centre of the strips, and the stress value is more than two times lower than in the strips with uniform cross-section, Picture 8.

Picture 5. Load applied to the cross-flexure in horizontal


plane

Increasing the stiffness of the elastic element in the horizontal plane has not led to a noticeable increase in stiffness in the primary degree of freedom [10].

For the simulated load, the maximum bending stress is obtained at the ends of the strips, Picture 6.


Basic characteristics of the primary oscillatory motion sensor are shown in Table 2, where FS is the sensor full scale.
The primary oscillatory motion sensor provides a good signal of the model primary motion. This is very important in the data reduction procedure. In most cases the signals from the sensors of the secondary oscillations, as well as the signals from the excitation moment sensors, are seriously contaminated by noise generated mainly by flow unsteadiness. As the noise level can be several times higher than that of the desired signal, it is generally a very hard task to extract these signals adequately. The knowledge that the desired signals from the excitation moment and secondary oscillation sensors are coherent with the primary motion permits the use of the cross-correlation technique. The cross-correlation technique is especially suited to applications where a clean reference signal, coherent with the one that needs to be extracted from the noise, is available. The signal from the primary oscillatory motion sensor is noise-free. The amplitude of this signal is determined by applying auto-correlation functions [11]. The primary oscillatory motion signal from the sensor realized on the cross-flexure pivot is used as the reference signal in order to determine the secondary oscillations and the amplitude of the excitation moment using the cross-correlation functions.
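As a rough illustration of how such a correlation-based extraction can be carried out numerically (the actual T-38 data reduction procedure of [11] is more elaborate), the sketch below recovers the amplitude and phase of a noisy signal that is coherent with a clean reference oscillation; all signal parameters are invented for the example.

```python
import numpy as np

# Sketch of reference-based amplitude/phase extraction for a noisy signal that
# is coherent with a clean primary-motion reference. All parameters are
# illustrative; this is not the actual T-38 data reduction code.
fs, f0, T = 2000.0, 5.0, 10.0                # sample rate [Hz], oscillation frequency [Hz], record length [s]
t = np.arange(0.0, T, 1.0 / fs)

secondary = 0.2 * np.sin(2 * np.pi * f0 * t + 0.6)       # desired coherent signal
noisy = secondary + 1.0 * np.random.randn(t.size)        # flow-unsteadiness noise, several times larger

# Correlate with in-phase and quadrature references at f0; the incoherent
# noise averages out over a long record.
ref_sin, ref_cos = np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)
a = 2.0 * np.mean(noisy * ref_sin)           # in-phase component
b = 2.0 * np.mean(noisy * ref_cos)           # quadrature component

amplitude = np.hypot(a, b)                   # recovered amplitude (~0.2)
phase = np.arctan2(b, a)                     # recovered phase relative to the reference (~0.6 rad)
print(f"amplitude = {amplitude:.3f}, phase = {phase:.3f} rad")
```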

Picture 8. Maximum stress in the strips with variable


cross-section in horizontal plane

3.1. The primary oscillatory motion sensor


In the T-38 forced oscillation experiments the amplitude
of the excitation moment, amplitude of the model angular
oscillatory motion and phase shift between these
quantities have to be measured. The amplitudes of the
excitation moments are measured by the strain gauge
balance.

5. CONCLUSION

The amplitudes of the wind tunnel model angular


oscillatory motion are measured with the primary
oscillatory motion sensor located on the cross-flexure
pivot.

One of the main tasks in the design of the cross-flexure pivot on the T-38 forced oscillation apparatus is to achieve the greatest possible stiffness in the horizontal plane and side force capability for the given overall dimensions of the elastic element. These dimensions are specified by the available space within the wind tunnel models for dynamic experiments, i.e. by the dimensions of the dynamic balance and the actuator arm of the apparatus. At the same time, the pivot should provide a large capability for the aerodynamic normal force, which is the dominant component of the aerodynamic load in the experiments. Since the cross-flexure pivot enables the primary oscillatory motion of the model, this large normal force capability should not diminish the high compliance in the primary (pitch) degree of freedom. Two different shapes of the flexure strips were considered: the first analysis was done for a uniform cross-section along the strips and the second analysis was done for strips with the largest thickness in the centre and the largest width at the ends.

Picture 9. Primary oscillatory motion sensor

Table 3. The results of the analyses: maximum bending stresses in strips [N/mm²], NASTRAN NX V9 results
                                      Vertical plane   Horizontal plane
Strips with uniform cross-section          580               790
Strips with variable cross-section         660               350

Table 2. The primary oscillatory motion sensor characteristics
Sensor measuring range [°]   Maximum error [% FS]   Hysteresis [% FS]
1.5                          0.25                   0.15

There are two primary oscillatory motion sensors on each side of the cross-flexure pivot, Picture 9. Measuring bridges are formed from 350 Ω foil-type strain gauges (Vishay Micro Measurements TK-06-S075P-350/DP). These measuring bridges can be used as individual sensors or can be connected into one sensor.
Results of the cross-flexure stress analyses are shown in Table 3 [10]. For the same given normal force, the values of maximum bending stress in the vertical plane for the strips

with uniform and variable geometry in the pitch plane are approximately equal. However, for the same given side force, the maximum bending stress in the strips with variable cross-section is more than two times lower. The variable cross-section of the strips provided a significant increase in stiffness of the cross-flexure pivot in the horizontal plane. It is very important that this increase in stiffness has not led to a noticeable increase in stiffness in the primary degree of freedom. Such a design of the elastic element provides the required amplitudes of the model oscillatory motion in the T-38 wind tunnel experiments.


ACKNOWLEDGMENT
This study was supported by the Military Technical Institute (VTI) and the Ministry of Education, Science and Technological Development of Serbia (project number TR 36050).

References
[1] Young, W.E.: An investigation of the cross-spring pivot, J. Appl. Mech., 11 (1944) A113-A120.
[2] Zelenika, S., De Bona, F.: Analytical and experimental characterization of high-precision flexural pivots subjected to lateral loads, Precis. Eng., 26 (2002) 381-388.
[3] Hongzhe, Z., Shusheng, B.: Accuracy characteristics of the generalized cross-spring pivot, Mech. Mach. Theory, 45 (2010) 1434-1448.
[4] Hongzhe, Z., Shusheng, B., Jingjun, Y., Guanghua, Z.: The accurate modeling and performance analysis of cross-spring pivot as a flexure module, in Proceedings of ASME IDETC/CIE, Brooklyn, New York, USA (2008) 1-7, ASME Paper No. DETC2008-49694.
[5] AEDC, Von Karman Gas Dynamics Facility (VKF), http://wwwnimr.org/systems/images/vkf.html.
[6] National Research Council, Canada, http://www.nrc-cnrc.gc.ca/eng/solutions/facilities/wind_tunnel_index.html
[7] Orlik-Rückemann, K.J.: Review of techniques for the determination of dynamic stability parameters in wind tunnels, AGARD-LS-114, Advisory Group for Aerospace Research and Development, NATO Research and Technology Organization, Brussels, Belgium, 1981.
[8] Samardžić, M., Anastasijević, Z., Marinkovski, D.: Comparison of the T-38 wind tunnel data obtained by static and dynamic tests, in Proceedings of 6th International Scientific Conference on Defensive Technologies OTEH 2014, Belgrade, Serbia (2014) 21-25.
[9] Samardžić, M., Anastasijević, Z., Marinkovski, D., Isaković, J.: Measurement of the cross-coupling derivatives due to pitching in the high Reynolds number blowdown wind tunnel, in Proceedings of 29th Congress of the International Council of the Aeronautical Sciences, St. Petersburg, Russia (2014) 1-8.
[10] Samardžić, M., Marinkovski, D., Anastasijević, Z., Ćurčić, Z., Rajić, Z.: An elastic element of the forced oscillation apparatus for dynamic wind tunnel measurements, Aerospace Science and Technology, 50 (2016) 272-280.
[11] Samardžić, M., Isaković, J., Miloš, M., Anastasijević, Z., Nauparac, B.D.: Measurement of the direct damping derivative in roll of two calibration missile models, FME Transactions, 41 (2013) 189-194.


PRELIMINARY AERODYNAMIC COMPUTATION OF LONG


ENDURANCE UAV WING
ABDELWAHID BOUTEMEDJET
Military Academy, Belgrade, abdelwahed1954@gmail.com
MARIJA SAMARDŽIĆ
Military Technical Institute, Belgrade, majasam@ptt.rs
ZORAN RAJIĆ
Military Technical Institute, Belgrade, zoran.rajic@vti.vs.rs

Abstract: The preliminary aerodynamic computation of the wing of a low speed, long endurance unmanned aerial vehicle (UAV) is formulated as a single-objective aerodynamic optimization. During this process, the wing planform parameters are optimized with maximization of endurance. Four design variables from the aerodynamics discipline, namely taper ratio, aspect ratio, wing loading and wing twist, are taken into consideration. The 3D wing aerodynamic analysis is performed by the XFLR5 panel method code. In the optimization process, a genetic algorithm is used to find the optimal solution under the defined requirements of the preliminary computation.
Keywords: UAV, wing endurance, optimization, genetic algorithm, panel method.

1. INTRODUCTION

2. PROBLEM FORMULATION

Aircraft design is a discipline of aeronautical engineering different from the analytical disciplines such as aerodynamics, structures, controls and propulsion. The design process for a UAV is divided into three major phases: the conceptual design, the preliminary design and the detailed design. The conceptual design, in which the basic questions of configuration, size, weight and performance are settled based on the mission specifications and requirements, is presented in [1-3]. The preliminary design then advances these concepts by individually designing and sizing the major components of the aircraft. The UAV aerodynamic design starts with a preliminary computation to satisfy the performance requirements.

The main goal of the UAV design is to ensure long endurance flight under the specified requirements. The wing is the main part that defines the UAV performances, so only the wing design is considered in this optimization study. In this optimization process, the design aims at maximizing the endurance, which is an aerodynamic aspect. Based on this aspect, the objective function, design variables and constraints are defined to formulate the optimization problem. The airfoil SD7062 is chosen in this paper for the optimized shape.

2.1. Objective function


The choice of the objective function in any optimization problem is dictated by the design requirements of the aircraft. One may find objectives such as life cycle cost or profit. Since these functions are usually very hard to connect to the design variables, lower level related objectives may be used. For a UAV, achieving maximum endurance is required for most such aircraft, since it allows them to accomplish the missions typical of UAVs; endurance is also an ideal lower level optimization objective for the aerodynamic designer. An initial estimate of the endurance can be obtained from the Breguet endurance equation:

E = (η/c) (CL^(3/2)/CD) √(2ρS) (1/√Wf − 1/√Wi)   (1)

where η is the propulsive efficiency, c is the specific fuel consumption, ρ is the air density, S is the wing area, CL is the lift coefficient, CD is the drag coefficient, and Wi and Wf are the aircraft weights at the beginning and the end of the cruise segment, respectively.

The optimization of CL^(3/2)/CD will contribute to optimizing the endurance, so the objective function is the maximization of the ratio CL^(3/2)/CD [7].

Many approaches have been used to realize the preliminary computation. In this paper a numerical method for optimizing the aerodynamics has been explored. These numerical methods are classified into one of three general categories: inverse methods, gradient-based methods and genetic algorithms (GA). The inverse method in aerodynamic design seeks to determine the aerodynamic shape for a specified surface pressure distribution. The general idea associated with gradient methods consists of the determination of the optimization objective, the parameterization of the geometry, and the computation of the search direction in the design space [4].

The GA is used to perform the optimization. Such a method is adequate for non-smooth design spaces which may contain many local optima. It is also used for multi-point design computations. One of the disadvantages of the GA is its expense, since the number of function evaluations required by such a method is large [5, 6].
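To make the endurance objective concrete, the short sketch below evaluates the endurance factor CL^(3/2)/CD and the Breguet endurance of Equation (1); the aerodynamic and propulsion values in it are invented placeholders, not data for the UAV studied here.

```python
import math

def endurance_factor(cl: float, cd: float) -> float:
    """Endurance factor CL^(3/2)/CD used as the optimization objective."""
    return cl ** 1.5 / cd

def breguet_endurance(eta, c, cl, cd, rho, S, W_i, W_f):
    """Breguet endurance, Eq. (1), in seconds for consistent SI inputs.
    c is the power-specific fuel consumption; W_i, W_f are weights in N."""
    return (eta / c) * endurance_factor(cl, cd) * math.sqrt(2.0 * rho * S) \
           * (1.0 / math.sqrt(W_f) - 1.0 / math.sqrt(W_i))

# Illustrative placeholder values (NOT the optimized wing of this paper):
E = breguet_endurance(eta=0.75, c=7.6e-7, cl=0.9, cd=0.045,
                      rho=1.19, S=0.75, W_i=68.7, W_f=58.9)
print(f"CL^1.5/CD = {endurance_factor(0.9, 0.045):.1f}, E = {E / 3600.0:.1f} h")
```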


2.2. Design variables

3. OPTIMIZATION PROCESS

The parameters related to the wing planform are considered as the design variables. The aspect ratio (AR), wing loading (WS), taper ratio (TR) and twist angle (θ) are chosen to reflect the effect of the aerodynamic discipline during the 3D wing optimization process. The following figure shows the wing shape:

3.1. Aerodynamic analysis


The Vortex Lattice Method (VLM) is used for the wing analysis. The wing is defined as a set of panels determined by the following parameters: length, root chord, tip chord and dihedral angle. The principle of a VLM is to model the perturbation generated by the wing by a sum of vortices distributed over the wing planform. The strength of each vortex is calculated to meet the appropriate boundary conditions, i.e. the non-penetration condition on the surface of the panels. The viscous drag is estimated by interpolation of XFoil pre-generated polars, using the CL value resulting from the linear VLM analysis. This code is well validated against wind tunnel experiments and other CFD codes [8].

Cr is the root chord, Ct is the tip chord, b is the wing span, S is the wing area, S = (Cr + Ct)·b/2, and W is the UAV weight.

The performance constraints are evaluated using a programmed function, as a simple computation suitable for the preliminary design stage. The aerodynamic characteristics obtained are used as part of the inputs of this function to evaluate the performance constraints.

3.2. Genetic algorithm

Picture 1. Design variables

The genetic algorithm is a search algorithm based on natural selection and genetics, first introduced by John Holland. The genetic algorithm presented in this paper utilizes three operators: pass-through, crossover and mutation [9]. In this algorithm, genes, chromosomes and fitness represent, respectively, the design variables, the design candidates and the objective function. After the design space is defined, the next step is to form an initial population of 50 chromosomes. The values of the genes corresponding to each chromosome are chosen randomly between fixed limits. For each formed chromosome, the fitness function is computed using the endurance function evaluation. A ranking process is used after the fitness computation: the fittest individual takes the first rank, and so on down to the last rank corresponding to the population size. The pass-through operator is used to enable 10 % of the fittest chromosomes to pass to the next generation without any change. A simple random crossover operator is utilized: a determined number of chromosomes (20 % of the population) undergo a modification using this operator, where the genes of the selected chromosomes are combined together to form new individuals. The other common operator is mutation, in which a subset of genes is chosen randomly and their values are changed to produce 70 % of the new individuals, as sketched below.
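A compact sketch of the operator split described above (population of 50, ranking, 10 % pass-through, 20 % crossover, 70 % mutation) is given below; the variable bounds and the fitness stand-in are illustrative assumptions, not the actual implementation.

```python
import random

# Sketch of the GA loop described in the text. Bounds and the fitness
# stand-in are illustrative assumptions only.
BOUNDS = {"AR": (6.0, 14.0), "WS": (5.0, 12.0), "TR": (0.3, 2.0), "twist": (-4.0, 2.0)}
POP_SIZE, GENERATIONS = 50, 100

def random_chromosome():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def fitness(ch):
    # Stand-in for the endurance evaluation (CL^1.5/CD plus constraint checks
    # in the real process).
    return -(ch["AR"] - 10.0) ** 2 - (ch["WS"] - 9.0) ** 2

def crossover(p1, p2):
    return {k: random.choice((p1[k], p2[k])) for k in BOUNDS}

def mutate(ch):
    child = dict(ch)
    k = random.choice(list(BOUNDS))
    child[k] = random.uniform(*BOUNDS[k])
    return child

population = [random_chromosome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)      # ranking process
    elite = ranked[: POP_SIZE // 10]                            # 10 % pass-through
    crossed = [crossover(*random.sample(ranked[: POP_SIZE // 2], 2))
               for _ in range(POP_SIZE // 5)]                   # 20 % crossover
    mutated = [mutate(random.choice(ranked))
               for _ in range(POP_SIZE - len(elite) - len(crossed))]  # ~70 % mutation
    population = elite + crossed + mutated

print(max(population, key=fitness))
```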

2.3. Constraints
There is a set of constraints usually associated with a wing design. In this paper only aerodynamic constraints are considered to formulate the optimization problem. These constraints are imposed on the performance parameters of the UAV:
- rate of climb (ROC)
- stall speed (VS)
- maximum speed (VM).
These constraints arise from the tactical requirements: the UAV has to be hand-launched at a speed of 10 m/s, accomplish its missions at a height of 300 m, and have a maximum speed of about 120 km/h because of the propulsion system power limitations. The maximum speed is delivered by an electric motor.

2.4. Mathematical formulation

The 3D wing design process explored in this paper can be formulated as a classical optimization problem as follows:

f(t) = max CL^(3/2)/CD   (2)

Constraints are:

h1(t) = ROC(t) − ROC*   (3)
h2(t) = VS(t) − VS*   (4)
h3(t) = VM(t) − VM*   (5)

where t = [AR, WS, TR, θ] is the vector of design variables.

3.3. Fitness function evaluation

Rapid computation is performed using the panel method; the XFLR5 flow solver is applied to compute the aerodynamic coefficients. The fitness function is evaluated using the lift and the drag coefficients. To make the optimization process simpler and faster, the flow solver is decoupled from the process, and the programmed genetic algorithm is implemented with a trained artificial neural network that gives an approximation of the aerodynamic coefficients. The database used to train the artificial neural network is obtained from a computation set evaluated using XFLR5.
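The decoupling idea can be illustrated with a toy surrogate: aerodynamic coefficients are precomputed for a sample of designs (by XFLR5 in the paper), a small neural network is fitted to them, and the optimizer then queries only the surrogate. Everything below (the sample generator, the network size and the analytic stand-in that replaces XFLR5) is an assumption for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def fake_xflr5(designs):
    # Analytic stand-in for an XFLR5 run: maps [AR, WS, TR, twist] to [CL, CD].
    ar, ws, tr, tw = designs.T
    cl = 0.6 + 0.02 * ar - 0.01 * tw
    cd = 0.02 + cl ** 2 / (np.pi * ar * 0.8)
    return np.column_stack([cl, cd])

# 1) Build the training database from a design-space sample.
X = rng.uniform([6, 5, 0.3, -4], [14, 12, 2.0, 2], size=(300, 4))
Y = fake_xflr5(X)

# 2) Train the surrogate once, outside the optimization loop.
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=0).fit(X, Y)

# 3) The GA fitness then queries only the cheap surrogate.
def endurance_factor(design):
    cl, cd = surrogate.predict(np.asarray(design).reshape(1, -1))[0]
    return cl ** 1.5 / cd

print(endurance_factor([10.5, 9.25, 1.57, -2.01]))
```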

Picture 2. Artificial Neural Network

3.4. Optimization
The optimization is performed using the encoded algorithm given by the diagram shown in Picture 3. First the airfoil SD7062, with a thickness of 14 %, is chosen to generate the planform of the optimized wing. This airfoil design work was conducted by David Wood and his research group. The large relative thickness and the high-lift characteristics of the SD7062 were what first triggered the selection of this low Reynolds number airfoil for small UAVs [10]. The wing parameters are determined using the defined objective function. The optimization model is formed essentially of an ANN block, a performance evaluation block and the GA optimization block. The ANN is applied to determine the lift and drag aerodynamic coefficients and the stall speed. The performance block is used to compute the rate of climb and the maximum speed. The GA block is used to drive the optimization process and to determine whether the termination criteria are satisfied.

Picture 3. Optimization process

4. RESULTS AND DISCUSSION


The optimized shape of the desired wing is illustrated in the
following figure. The wing is given by the distribution of
pressure coefficient and the stream line. The flow structure
takes the shape of parallel stream lines behind the wing
planform, which is suitable for good fly performances where
the drug value is small because of the uniform flow. At the
two ends of the wing it is easy to observe the formation of
3D structures which are wingtip vortices formed because of
the pressure gradient between the two surfaces of the wing.

Picture 4. Wing planform


The aerodynamic design variables of the solution obtained using the GA are shown in Table 1:
Table 1. Wing design variables
Wing loading (WS): 9.25
Aspect ratio (AR): 10.5
Taper ratio (TR): 1.57
Twist angle (θ): -2.01

The weight of the UAV is fixed at about 7 kg; based on the obtained parameters, the geometrical characteristics of the wing are summarized in Table 2:
Table 2. Wing geometry variables
Wing area (S): 0.756 m²
Wing span (b): 2.818 m
Root chord (Cr): 0.351 m
Tip chord (Ct): 0.292 m
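As a quick consistency check of Table 2, and under the assumption that the wing loading WS of Table 1 is expressed in kg/m², the planform area and span follow directly from the optimized variables:

```python
import math

# Assumed interpretation: WS in kg/m^2, weight W in kg, AR dimensionless.
W, WS, AR = 7.0, 9.25, 10.5

S = W / WS                # wing area [m^2]  -> ~0.757, cf. Table 2 (0.756 m^2)
b = math.sqrt(AR * S)     # wing span [m]    -> ~2.82,  cf. Table 2 (2.818 m)
mean_chord = S / b        # mean geometric chord [m], since S = (Cr + Ct)*b/2

print(f"S = {S:.3f} m^2, b = {b:.3f} m, mean chord = {mean_chord:.3f} m")
```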
Picture 6. ROC variations with speed

The performances of the evaluated wing after the process of GA optimization are given in Table 3.
Table 3. Wing performances
Endurance factor (CL^(3/2)/CD): 23.453
Rate of climb (ROC): 7.48
Maximum speed (VM): 41 m/s
Stall speed (VS): 10 m/s

5. CONCLUSION

Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to the design space. The presented paper summarizes the preliminary aerodynamic computation for a small UAV under defined requirements, using the panel method to resolve the flow around the wing and a GA to evaluate the optimized wing shape.

Picture 5 represents the variation of the endurance coefficient with the angle of attack (alpha). The maximum value of this coefficient occurs at an angle of attack of about 5°. In real cases the cruising angle of attack is less than 5°; according to this diagram, down to an angle of attack of 2.5° the endurance coefficient still has relatively high values.

References
[1] Raymer, Daniel P.: Aircraft Design: A Conceptual Approach, American Institute of Aeronautics and Astronautics, USA, 1989.
[2] Torenbeek, Egbert: Synthesis of Subsonic Airplane Design, Delft University Press, The Netherlands, 1976.
[3] Roskam, Jan: Airplane Design, The University of Kansas, Lawrence, 2000.
[4] Holst, Terry L., Pulliam, Thomas H.: Aerodynamic Shape Optimization Using a Real-Number-Encoded Genetic Algorithm, 19th Applied Aerodynamics Conference, 2001.
[5] Obayashi, S., Tsukahara, T.: Comparison of Optimization Algorithms for Aerodynamic Shape Design, AIAA Journal, 35 (1997) 1413-1415.
[6] Bock, K-W.: Aerodynamic Design by Optimization, AGARD CP, 1990.
[7] Sóbester, A., Forrester, A.I.J.: Aircraft Aerodynamic Design, John Wiley & Sons, Ltd, 2015.
[8] XFLR5 v6.02 Guidelines, http://www.xflr5.com/xflr5.htm
[9] Gendreau, Michel, Potvin, Jean-Yves: Handbook of Metaheuristics, Springer, 2003.
[10] Lyon, Christopher A., Broeren, Andy P., Giguère, Philippe, Gopalarathnam, Ashok, Selig, Michael S.: Summary of Low-Speed Airfoil Data, SoarTech Publications, Virginia Beach, Virginia, 1995.

Picture 5. Endurance variations with attack angle

The maximum rate of climb and maximum speed are


obtained from the diagram shown in Picture 6.


SECTION II

Aircraft

CHAIRMAN
Professor Dragoljub Vujić, PhD
Mirko Kozić, PhD

DEVELOPMENTS IN HEAD-UP DISPLAY TECHNOLOGY FOR BASIC


AND ADVANCED MILITARY TRAINING AIRCRAFT
ROBERT WILSEY, FRAeS
Director Total Reaction Ltd, Representing Esterline CMC Electronics Inc. Knighton, UK, treaction@btinternet.com

Abstract: Since the 1960s Head-Up Display development has revolutionized military flight training. A HUD allows the pilot to fly head-out by reference to the angle of attack and velocity vector as well as the primary flight display information. Digital navigation information and weapon aiming solutions can also be displayed. Today's trainer HUDs can mimic the displays seen on front-line fighters, permitting cost-effective Lead-in Fighter training.
Keywords: Fighter, Trainer, HUD, Avionics.
no magnification, projecting two collimated concentric
aiming rings. This allowed the pilot to move his head in
combat and still be able to see focused aiming rings. With
his eye position at five inches from the sight, the Field of
View (FOV) was 20 degrees. Collimation is the
projection of parallel light rays so that the focus is on
infinity. This allows the aiming pipper or ring to be
focused on infinity, matching the outside world and thus
reducing aiming errors due to parallax between the sight
and the target. Thus the pilot will see a focused aiming
solution when his eyes are focused on a distant aircraft. A
major advance in technology was the adoption by the Germans of the Oigee reflector gun sight manufactured by Optische Anstalt Oigee of Berlin, introduced during the last months of the First World War and fitted to the Fokker
Dr.1 tri-plane and Albatros D.V bi-plane fighters. This
allowed the pilot greater freedom to move his head
without the visual obstruction of a tube, whilst the
illuminated pipper was projected onto a glass screen.
During the inter-war years the reflector sight was further
refined. After the outbreak of the Second World War the
gyroscopic or gyro gun sight (GGS) was developed in
1941-43 at RAE Farnborough, which fed information on
the rate of turn and skid into the sight to adjust the
position of an illuminated graticule of six diamonds which
could be set to correspond with the wingspan of the target
by rotating the throttle grip, known as stadiametric
ranging. This took much of the guesswork out of
deflection shooting. The Germans in particular produced
some advanced examples of gyro sights. The RAF
experimented with a radar projector system for their De
Havilland Mosquito night fighters fitted with the AI Mk
IX radar in 1944 which projected the radar image of the
target together with the gunsight graticule and an artificial horizon onto the pilot's windshield using a cathode ray
tube and lenses. This was probably one of the first true
aircraft HUDs as it displayed flight attitude information
together with target and aiming solutions but was never
adopted. The Blackburn NA.39 strike fighter, later named
the Buccaneer, was the first British military aircraft to
enter service with a HUD in 1958, manufactured by
Cintel, developed for very low-level anti-shipping strikes.

1. INTRODUCTION
Over the last 12 years the introduction of 4th and 5th Generation front-line fighters has resulted in a revolution
in basic and advanced military flying training. The cost of
operating front-line fighters has increased dramatically
and has led to the necessity of downloading flying
training to less expensive platforms such as Lead-In Fighter Trainers (LIFT) and Advanced Jet Trainers (AJT).
In turn this has led to a knock-on effect of downloading as
much as possible of advanced flying training to less
expensive Basic or Intermediate trainers. A key element
in this process has been the development and use of the
all-important Head-Up Display (HUD).

Picture 1. Example of typical F/A-18 HUD symbology


during a dogfight.PD.

2. DEVELOPMENT OF THE HUD


Probably the first use of an optical device by pilots was
the collimated gun sights used on First World War
biplanes. The Aldis sight was adopted by RFC pilots in
1916-1918 on their S.E.5a, Sopwith Camel and Spad
S.XIII bi-plane fighters. The French developed a similar
Chretian collimated sight. The Aldis sight consisted of a
hermetically sealed tube, resembling a telescope but with

- Target identification improved.
- Safer instrument approaches with head up when approaching decision height or minimum descent height.
- Easier scan whilst formation flying and air-to-air refueling.
- Safer low speed and hover VSTOL operations when outside visual cues are critical.
- Reduces the need for a Weapon System Officer (WSO or back-seater) in ground attack and air defence aircraft.

Typically, the HUD is controlled by an Up Front Control


Panel (UFCP) attached to the front lower HUD body
which, with a combination of soft buttons and LED or
LCD window displays, allows the moding of the HUD to
be changed together with the insertion of waypoints and
communication frequencies. The UFCP also controls the
brilliance of the HUD and UFCP displays with test
functions and day night lighting switches.

Picture 2. Aldis gunsight on WW1 Royal Aircraft


Factory S.E.5a fighter. The Vintage Aviator Ltd, New
Zealand

A quicker and more intuitive means of changing the HUD


display known as Master Moding (first developed on
the F-16) enables the pilot to switch from navigation
based display to air-to-ground to air-to-air modes by a
simple press of a Hands on Throttle and Stick (HOTAS)
button on the throttle grip. The displays in the HUD and
on the Multi-Function Displays (MFDs) can be preprogrammed by the pilot for each mode as he desires.
The power supply controls and software for the HUD
reside either in the HUD or run from a dedicated card
inside an open architecture Mission Computer. This
computer, if dedicated to driving the HUD only, is
sometimes referred to as the HUD Symbol Generator
(HSG).
The HUD with associated UFCP and HUD camera are
mounted on the dashboard of the panel on a specially
manufactured tray attached to primary aircraft structure.
This is carefully designed to align with the pilot's design eye position (DEP). It is imperative that the HUD cannot
move in its mounting (except for adjustment), that the
combiners clear the canopy bow for HUD fitting and
removal, that the combiners are clear of the windshield
and canopy in the event of the canopy flexing due to bird
strike, and that the front face of the UFCP is clear of the
ejection seat line. The resultant is a very accurate
placement of symbology, often 5 to 2 milliradians,
depending on its position in the FOV.

Picture 3. Ferranti MkIIC Gyro Gunsight in a Spitfire


MkIX towards the end of the Second World War. PD

3. THE HUD IN FRONT-LINE AIRCRAFT


The LTV A-7 Corsair was the first operational US aircraft
to be fitted with a HUD, the HUD Weapon Aiming
System (HUDWAS), produced by Marconi-Elliott. The
HUD, which has equipped most front-line aircraft since
the 1970s, allows pilots to tactically fly their aircraft,
whether it be whilst conducting air-to-ground close air
support missions or during air-to-air combat missions by
reference outside the aircraft with primary flight
information displayed in their field of view together with
computer generated weapon aiming solutions. However,
there are additional advantages that the HUD brings to the
pilot which include:
- Increased tactical awareness with increased lookout.
- Improved flight safety, especially at low level and in poor visibility.
- Simplified navigation with waypoints and targets boxed in the pilot's field of view and with steer points and distance to go displayed.
- Flight by reference to Angle of Attack.
- Use of Velocity Vector (or energy) information.

Picture 4. Internal arrangement of a typical Refractive


HUD. Esterline CMC Electronics.

TFOV. The symbology is usually designed so that


primary flight information lies within this IFOV, but other
conformal symbols may move beyond the DEP IFOV.
This necessitates the pilot moving his head to see symbols
that have moved toward the edge of the TFOV. This
behavior is often described as like looking through a
hole in a fence. The hole is not large enough to see the
entire TFOV from the DEP. However, moving closer to
the HUD permits more to be seen, and moving laterally or
vertically permits a shift to see other parts of the display
as required. The projected imagery is traditionally green
phosphor P53 which has the attributes of lasting longer
than other greens and is also tuned to the peak wavelength
of human vision. The refractive HUD represents a lower
cost and a lighter solution compared to a holographic
HUD and is particularly suited to trainers, ground-attack
aircraft and 4th Generation fighters.

4. ADVANTAGES OF THE HUD FOR


MILITARY TRAINING AIRCRAFT
The software that draws the HUD symbology on the HUD combiner can today be written so that the HUD will mimic the symbology and functionality of various front-line aircraft. This, for instance, allows the BAE Systems HUD on the RAAF Hawk Mk127 to display similar symbology to the RAAF F/A-18 Hornet with the same button presses. The pilot's HUD view is also recorded by
means of a digital mission recorder. A miniature HUD
camera, now usually placed forward of the HUD
combiners, records the pilot's view of the outside world
whilst the HUD symbol generator or a mission computer
superimposes the HUD symbology onto the video
recording. Green phosphor symbology is difficult to film
and this computer generated superimposition results in a
clearer, more focused, recording and improved exposure.
This feature allows for timely and accurate post training
sortie debriefs from the instructor. The flying instructor in
the rear seat must also have access to what the student is
seeing through the HUD and this is achieved in one of
three ways:
A second rear seat HUD.
A rear seat conformal HUD Repeater.
A rear seat HUD repeater MFD menu page.
The format of the rear seat HUD display depends on the
design of the rear cockpit and to what degree it is angled
above the front ejection seat head-box. Both the
McDonnell Douglas TAV-8B Harrier and Eurofighter
Typhoon T1/T3 have separate rear seat HUDs with a
FOV largely above the front ejection seat head-box. LIFT
and advanced trainers are more usually equipped with a
dedicated HUD repeater monitor which is mounted
conformally above the rear dash enabling the instructor to
land the aircraft using the repeater and peripheral vision if
required. The third option is to display the HUD view in
an MFD menu page. This is a low cost solution but
requires the instructor to refer to a head-down display.

Picture 5. Esterline CMC Electronics SparrowHawk 25


degree Refractive HUD Esterline CMC Electronics

7. REFLECTIVE HUD TECHNOLOGY


The reflective HUD (also known as a diffractive HUD or
pupil relaying system) has a single, large diameter, oblong
combiner which gives a larger TFOV than a refractive
HUD. From the Design Eye Position, the IFOV of the
reflective HUD usually covers almost the entire TFOV.
Head movement is possible whilst retaining view of all
symbology within a defined eye motion-box or design
eye box, but when the head is moved outside this
imaginary box during high energy maneuvering all
symbology may disappear. This is not as disadvantageous
as it sounds and a pilot soon learns where to place his
head in order to regain the symbology. The relay lenses
required to adjust the light to be displayed on the single
combiner are complex and heavy. The combiner itself
acts as a collimating lens and an added advantage is that
the combiner can be placed further away from the pilot
than with a refractive HUD. The large TFOV provided by
the single combiner provides a more practical solution for
displaying raster images such as Forward Looking InfraRed (FLIR), Enhanced Vision System (EVS) or Synthetic
Vision on the HUD. The reflective HUD is relatively

5. HUD FIELD OF VIEW


For all types of HUD, the term Instantaneous Field of
View (IFOV) describes the overall size and shape of the
symbol space that can be seen from any specific head
position. The IFOV typically changes as the pilots head
position changes. The term Total Field of View
(TFOV) describes the fixed overall limits within which
the HUD can display symbols. The TFOV does not
change with head position.

6. REFRACTIVE HUD TECHNOLOGY


The traditional refractive HUD uses a Cathode Ray Tube
(CRT) and lenses to project collimated symbology onto
either one or two flat combiners. The use of a second
combiner increases the vertical dimension of the IFOV.
However, due to space constraints in the cockpit, the size
of the optics is limited and the IFOV from the cockpit
design eye position (DEP) is typically smaller than the

heavy and expensive and more suited to 4.5 and 5th


Generation front-line fighters where a large IFOV is
required.


but essential sorties. Neither the F-35 Joint Strike Fighter


(JSF) nor the F-22 Raptor have a two-seater training
variant and the current estimate of the cost per hour for
the F-35A is approximately US$42,000 (compared with
US$20,000 per hour for the F-16C) [8]. It is estimated
that 210 flying hours per year will be required to keep an
F-35 pilot fully combat ready [10]. There is therefore an
urgent requirement for Lead-in Fighter Trainers (LIFT) to
mimic 5th Generation Fast Jets. Much of operational
conversion training and Squadron continuation training
can then be downloaded to a suitably equipped LIFT. The
LIFT must be able to mimic the HUD, HMD symbology
and the wide touch-screen display technology of the 5th
generation front-line aircraft with fidelity.

8. THE HUD AS PART OF A DIGITAL


COCKPIT SOLUTION
The HUD and UFCP are closely integrated with the
HOTAS, weapon delivery, MFD selection and displays,
GPS navigation and sensors. The HUD is the key to the
integrated fighter and trainer cockpit which has two
additional levels of redundancy. Primary Flight
information is displayed in the HUD, on the PFD page of
the MFDs and typically on a stand-alone electronic
standby instrument system which runs off its own
independent power supply in the event of a total systems
failure. Fly-by-wire and care-free handling has made the
pure flying skills required to fly 5th Generation aircraft
less critical whilst systems management, information
processing and decision making under high G-loads have
become increasingly essential. Most of these skills can be
learned on a less capable and lower cost platform using
training software to mimic the symbology and systems
management of the front-line platform.

Virtual training, mission de-briefing tools and simulations


can accurately replicate expensive advanced sensors and
mission systems. For example the requirement for an
expensive and highly capable radar can be replaced for
training purposes by computer generated virtual radar
together with virtual defensive systems, including RWR,
chaff and flares. Related virtual HUD radar symbology
items such as a Target Designator box, locator line and
text boxes can be displayed using Virtual Training System
(VTS) software. A 5th Generation fighter such as the F-22 is designed to engage multiple adversaries with beyond visual range (BVR) AIM-120 AMRAAM missiles. Thus it is wasteful in terms of both training time and money to practice unrepresentative 1v2 air-to-air combat. It is also prohibitively expensive to get 14 high-performance aircraft airborne to act as targets. VTS will enable the LIFT student to engage, for instance, 12 virtual adversaries during a training mission at the cost of one hour's flight time in a LIFT. The combination of these
features means that valuable flying training hours can be
downloaded from a US$42,000 per hour front-line fighter
to a US$3,000 - 9,000 per hour LIFT/advanced jet trainer.
In response to these requirements Esterline CMC
Electronics has recently integrated a Digital HUD with a
20 x 8 inch Large Area Display (LAD) touch screen,
replicating the 5th Generation cockpit. The large area
display can show multiple windows and includes
synthetic vision. This, and other future LIFT glass
cockpits, will not only help solve the 5th Generation
training problem, but will allow front-line squadron pilots
to remain in current operational readiness without using
very expensive front-line aircraft.

Picture 6. BAE Systems 35 x 25 degree reflective HUD


on Eurofighter Typhoon. Eurofighter GmbH

9. THE FUTURE OF THE HUD


The most important recent change to the conventional
HUD has been the replacement of the CRT with a digital
light engine (DLE). CMC's digital SparrowHawk HUD was unveiled at the Farnborough Airshow 2012 as part of CMC's next generation cockpit (Cockpit-4000 NexGen).
It utilizes a Digital HUD, replacing a conventional CRT
with a digital light engine, eliminating the requirement for
a high voltage power supply to the CRT. The introduction
of the DLE will mean that the life-limited CRT
component will no longer have to be replaced, typically
every 2,000 hours, saving expensive down-time. In
addition to improved MTBF the symbology will also burn
brighter and not suffer from fade as did the CRT based
symbology. This in turn will improve the raster display
performance of the HUD.

10. THE HUD IN FLIGHT SIMULATORS


With today's emphasis on realistic simulation, a modern full motion flight simulator can replicate the real aircraft
with fidelity. This includes the HUD when fitted to the
real aircraft, but there is a technical adjustment that has to
be made to the HUD optics since the outside world is
usually a computer-generated scene projected onto a
curved wrap-around screen. For simulator applications the
HUD symbology focus therefore has to be specially
tuned. Such focus tuning is effective with domes of 10
meters radius or greater. With the increasing importance
on providing compact visual systems in modern
simulators, it becomes difficult to refocus the HUD
without introducing undesirable display artificialities that

With the introduction of the 5th Generation aircraft, front-line jets will prove to be too expensive to fly on anything

increase as the distance to the dome decreases. One solution


is to project the HUD symbology with the visual scene and
forgo use of real HUD symbol generation. This works but
there are lingering concerns as to whether or not training on a
HUD without the relationship between IFOV and TFOV
being present might introduce negative training elements that
must be overcome in the real aircraft.


HMD which is useful for the accurate delivery of


boresight weapons and for instrument approaches. The
HUD/HMD combination also offers an effective backup
in the event of HMD failure, especially in the case of
night or poor weather recovery to a carrier. The cost and
complexity of HMDs will be beyond the budget of many
smaller nations. There is thus a continuing requirement
for HUD and it is unlikely that the HMD will replace a
HUD in training aircraft in the near future.

11. IMPACT OF HELMET MOUNTED


DISPLAYS

12. CONCLUSION

Helmet Mounted Displays (HMD) were first used operationally by the South African Air Force on the Dassault Mirage F1-AZ and later the Atlas Cheetah. Russia and Israel were quick to develop the concept further and Elbit's DASH was the first HMD to see service in the West. Development of HMDs for 4th and 5th Generation fighters is progressing, although there have been considerable technical challenges with issues including:
- Weight and ejection safety
- Lag in displaying information
- Jitter
- Alignment of the pilot's eye and HMD
- Incorporating night vision into the HMD

In summary Head-Up Displays are now a core part of


military training aircraft avionics. Training the 5th
Generation fighter pilot on front-line aircraft will be
unaffordable and thus a new generation of Lead-in Fighter
Trainers will be required. They must be able to mimic the
handling, feel and mission systems of the front-line. As
much of the advanced training syllabus as possible should
flow down to basic training in order to make the best
economic use of valuable and dwindling assets. Integrated
glass cockpit technology, together with the use of flight
simulators, replaces expensive flying hours and increases
the effectiveness of training the modern combat pilot.
Despite the development of the HMD the HUD is likely
to continue to play a vital part in training for many years.

Advanced fighters such as the F/A-18E Super Hornet,


Eurofighter Typhoon, Dassault Rafale and the Lockheed
Martin F-22 Raptor rely on the HUD for primary flight
information whilst the HMD gives additional tactical
situational awareness and off-axis weapon cueing. The
Lockheed Martin F-35 Lightning II JSF will be the first
aircraft designed to rely solely on an HMD, the ESA
Visions Systems Helmet Mounted Display System
(HMDS), without a HUD. All other current HMD
equipped front-line fighters such as the Typhoon, with the
BAE Systems Head Equipment Assembly (HEA) Striker
HMD, continue to use a HUD. Currently a HUD will
generally produce more accurate symbology than an

Picture 8. Example of a modern LIFT Trainer Cockpit.


CMC Electronics new Cockpit-4000 NexGen, showing
28 degree digital HUD (top) and an early demonstration
20 x 7 inch Large Area Display touchscreen. Esterline
CMC Electronics.

Picture 7. Elbit Systems of America/VSI International


HMDS for F-35 JSF. PD

References
[1] http://simhq.com/forum/ubbthreads.php/topics/3272619/Re:_Aldis_gunsight
[2] Kopp, C.: The Modern Fighter Cockpit, Australian Aviation & Defence Review, March 1981.
[3] Wilsey, R.: Developments in Glass Cockpit Technology, Front Line Defence, Vol 10, No 1.
[4] Penney, S.: Honing the Hawk, Flight International Magazine, 25 Feb 2003.
[5] Wood, R.B., Howells, P.J.: Head-Up Displays, The Avionics Handbook, CRC Press, 2001.
[6] Croft, John: Helmet Mounted Displays: Adding Night Vision, Aviation Today, 1 Sep 2006.
[7] Clarke, R. Wallace: British Aircraft Armament: Volume 2 - RAF Guns and Gunsights from 1914 to the Present Day, Patrick Stephens, 1994.
[8] Drew, James: F-35A cost and readiness data improves in 2015 as fleet grows, Flightglobal, 02 February 2016.
[9] Gunner, Jerry: Netherlands & the F-35, Air Forces Monthly, July 2014.
[10] HUDWAC, Flight International, 14 November 1974.
[11] Newman, Richard L.: Head-Up Displays, Avebury Aviation, 1995.


FLIGHT PERFORMANCE DETERMINATION OF THE PISTON ENGINE AIRCRAFT SOVA, COMPUTER PROGRAM "SOVAPERF"
NEMANJA VELIMIROVIĆ
YUGOIMPORT-SDPR, Belgrade, Serbia, velimirovicnemanja@yahoo.com
KOSTA VELIMIROVIĆ
Military Technical Institute, Belgrade, Serbia, kolevelimirovic@yahoo.com

Abstract: A method for the estimation of the performances of piston engine aircraft, armed or unarmed, is presented in this paper. The new computer program "SOVAPERF" is used for the calculation of basic and special performances of piston engine aircraft. A nonlinear total-energy model is the basis for the performance calculation in the computer program. The computer program "SOVAPERF" consists of several modules: Module for determining the mass of the aircraft; Airport Module - based on current meteorological conditions it determines the data of the airport and requests the frontal wind speed, slope and quality of the runway; Atmosphere Module - data calculations are made for the altitude, for all conditions from polar to tropical; Aerodynamics Module - loads the aerodynamic characteristics of the clean, takeoff and landing configurations and also presents data on the aerodynamic characteristics of the launcher under the wing and the weapons; Engine Module - provides information on the performance and fuel consumption as a function of the engine regime and the flight altitude; Performance Module - computes the minimum and maximum speed, climb and ceiling; Turn Module - turn data calculation; Takeoff and Landing Modules - present all data related to the take-off and landing (characteristic lengths, velocities, times and fuel consumption); Cruise Module - optimal parameters of cruising; Start, Taxi and Combat Modules - compute fuel consumption and other data; Range Module - the most complex module of the computer program "SOVAPERF", which calls all the listed modules and determines the maximum length of the flight profile. The method and its results are illustrated by a numerical example, in which the performances of the armed aircraft SOVA are presented. The programming language used is MATHCAD 14.
Keywords: Aircraft, piston engine, performances, computer program.

1. INTRODUCTION
The production of the domestic aircraft SOVA (SDPR, UTVA-Pančevo) has started. SOVA (Picture 1) (Table 1) is an all-metal, four-seat ("side by side" arrangement), low-wing, fixed landing gear, single-engine aircraft, intended for initial training and selection, sport flying and tourism, and light ground attack. The power plant is a Lycoming IO-390-A1 B6 with 210 HP maximum power (Picture 2). The aircraft SOVA can be armed with guns, bombs and rockets. SOVA has excellent flying characteristics: it is a very good trainer, due to exceptional features in low speed flight, and a strike plane of good maneuverability, needed in combat conditions [3, 4]. SOVA can be used for reconnaissance, patrolling and anti-terrorism operations. The aircraft SOVA is equipped with new electronic equipment ("glass cockpit") (Picture 3). A "glass cockpit" is an aircraft cockpit that features electronic (digital) instrument displays, typically LCD screens, rather than the traditional style of analog dials and gauges. The "glass cockpit" has become standard equipment in modern aircraft. During the processes of airplane modernization, design and exploitation, great attention is focused on the performance [11]. For aircraft performance calculations it is necessary to have a "tool". "SOVAPERF" is a new "tool" - a computer program for the calculation of piston engine aircraft flight characteristics. This paper presents the aircraft SOVA performance calculation.

Picture 1. Aircraft SOVA (SDPR, UTVA-Panevo,


Serbia), exhibition Partner 2015


Table 1. Characteristics of the aircraft SOVA
Basic trainer: max. takeoff weight (4 pilots, max. internal fuel): 1250 kg
Weight of empty aircraft, equipped: 750 kg
Max. internal fuel: 124 kg
Wing surface area: 14.63 m²
Propeller: Hartzell, two-blade, all-metal, constant speed, diameter 1.93 m
Power plant (max. power, air conditioning provisions, electronic ignition): Lycoming IO-390-A1 B6, 210 HP


4. Aerodynamics Module - loads the aerodynamic characteristics of the clean, takeoff and landing configurations [5]. This module also presents data on the aerodynamic characteristics of the launcher, missiles, bombs and gun.
5. Engine Module - piston engine: provides information on the performance and fuel consumption as a function of the engine regime and the flight altitude [7].
6. Performance Module - computes the minimum and maximum speed, climb and ceiling.
7. Turn Module - turn data calculation.
8. Takeoff and Landing Modules - present all data related to the take-off or landing. The data refer to the characteristic lengths, velocities, times and fuel consumption.
9. Cruise Module - optimal parameters of cruising.
10. Start, Taxi and Combat Modules - compute fuel consumption and other data.
11. Planning module.
Range Module - the most complex module of the computer program "SOVAPERF". This module calls all the listed modules and determines the length and time of the flight profile [3], [4].
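As a minimal illustration of the total-energy approach behind the Performance and Range modules (not the actual SOVAPERF implementation, which is written in MATHCAD), the specific excess power can be evaluated from thrust available, drag and weight; the drag polar and propeller efficiency below are assumed placeholders rather than SOVA data.

```python
import math

# Minimal total-energy / specific-excess-power sketch, NOT the SOVAPERF code.
g = 9.81                  # m/s^2
rho = 1.225               # sea-level air density [kg/m^3]
S = 14.63                 # wing area [m^2] (Table 1)
m = 1250.0                # take-off mass [kg] (Table 1)
CD0, k = 0.03, 0.05       # assumed parabolic drag polar: CD = CD0 + k*CL^2
eta_p, P_max = 0.8, 210 * 745.7   # assumed propeller efficiency, max engine power [W]

def specific_excess_power(V):
    """Ps = (T - D)*V / W: climb rate available at constant speed V [m/s]."""
    W = m * g
    CL = 2.0 * W / (rho * S * V ** 2)              # level-flight lift coefficient
    D = 0.5 * rho * S * V ** 2 * (CD0 + k * CL ** 2)
    T = eta_p * P_max / V                          # thrust available from the piston-prop
    return (T - D) * V / W

for V in (40.0, 50.0, 60.0):
    print(f"V = {V:4.0f} m/s  Ps = {specific_excess_power(V):5.2f} m/s")
```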

Picture 2. Lycoming IO-390-A1 B6

Picture 3. Aircraft SOVA - glass cockpit, exhibition Partner 2015

2. COMPUTER PROGRAM "SOVAPERF"


"SOVAPERF" computer program (Picture 4) is used for
the calculation of basic and special performances aircraft
equipped with piston power plant. It is new computer
program. Nonlinear model total energy which served as
the computer program was basis for performance
calculations [1], [2], [6], [8]. Plane is considered as a
material point[9]. Used programming language is
MATHCAD 14.
The program consists of several modules [12], [13]:
1. odule for determining the mass aircraft,
2. Airport Module - based on current meteorological
conditions determine the data of the airport. This
module requests data: frontal wind speed, slop and
quality of the runway.
3. Atmosphere Module - data calculations are made for
the altitude. It works for all conditions from polar to
tropical.

Picture 4. "SOVAPERF" computer program - flowchart

3. NUMERICAL EXAMPLE
Some aerodynamic characteristics, flight capabilities and other performances of the aircraft SOVA (Picture 5) (weight 1279 kg, configuration 1 pilot + 2x Bomb FAB-100

M80, Picture 6, Table 2) are presented in the numerical example. The diagram in Picture 7 shows the drag and lift coefficients of the configuration with 2x Bomb FAB-100 M80. Picture 8 presents the sea level and altitude performance. Picture 9 is a diagram of minimum fuel flow versus power. Picture 10 presents the propeller efficiency versus advance ratio for different power coefficients. The speed-altitude envelope (maximum speed, minimum speed, minimum speed with flaps) is presented in Picture 11. Thrust available and thrust required, for different heights and velocities at maximum engine rating, are presented in Picture 12. Specific excess power for different heights and velocities at maximum engine rating is presented in Picture 13. Take-off distances are presented in Picture 14. The combat flight profile (aircraft configuration: 1 pilot + 2x Bomb FAB-100 M80, reserve fuel 10%) is shown in Picture 15 [10]. Geometric characteristics of the flight profile are shown in Table 3. Weather conditions and the "altitude" of the airport (Standard Atmosphere, STA) are presented in Table 4. Some calculated data of the optimal flight profile are presented in Table 5 [9, 14].

Picture 8. Lycoming IO-390-A1 B6, Sea level and altitude


performance

Picture 5. Aircraft SOVA+2xBomb FAB-100 M80

Picture 9. Lycoming IO-390-A1 B6, min. fuel flow(lb/hr)


versus power (hp) - Power (80-210hp)


Picture 6. Bomb FAB-100 M80


Table 2. Bomb FAB-100 M80 specifications
Diameter: 230 mm
Length: 1490 mm
Mass of bomb: 117 kg
Mass of warhead: 39 kg (TNT)


Picture 10. Variable pitch installed propeller (Hartzell, D = 1.93 m), efficiency (-) versus advance ratio Jtt (-) for different power coefficients Cp

Picture 7. Aircraft SOVA, drag coefficient Cxavn and lift coefficient Cz, configuration 2x Bomb FAB-100 M80, velocity V = 50 m/s, height h = 1000 m


Picture 14. Take-off, configuration 1 pilot + 2x Bomb FAB-100 M80 (SZ = 583 m, Spl = 745 m)

Picture 11. Aircraft SOVA, true speed-altitude envelope, configuration 1 pilot + 2x Bomb FAB-100 M80, height h (0.0-4000 m), velocity V (0.0-250.0 km/h), maximum engine rating
Picture 15. Range, length of the flight profile


Table 3. Data flight profile
Height cruising 1 (5)
Height holding (6)
Height penetration 1 (8)
Combat (9)
Height penetration 2 (10)
Height cruising 2 (12)

Table 4. Meteorological data, airport
Temperature, airport (°C)          15.0
Air pressure, airport (Pa)         1.013*10^5
Head wind, airport (m/s)           0.0
"Altitude" of airport, STA (m)     5.096

Picture 12. Aircraft SOVA, thrust available Telise (N) and thrust required Rxav (N), for different heights (0, 500, 1000, 1500 m) and velocity Vt (0-250.0 km/h), maximum engine rating


Table 5. Calculated characteristics of the optimal flight profile
No.  Segment       Height (m)    Time (min)   Distance (km)
1    Start         5.            5.           -
2    Taxi          5.            5.           -
3    Takeoff       5.            0.464        0.745
4    Climb 1       5.-1500.      10.537       25.529
5    Cruising 1    1500.         75.059       195.45
6    Holding       1500.         15.0         -
7    Planning 1    1500.-300.    4.155        10.4
8    Penetr. 1     300.          14.417       50.0
9    Combat        300.          3.0          -
10   Penetr. 2     450.          11.736       40.0
11   Climb 2       450.-1300.    2.99         6.898
12   Cruising 2    1300.         87.495       222.7
13   Planning 2    1300.-5.      5.25         11.538
14   Landing       5.            0.426        0.63


Picture 13. Aircraft SOVA, specific excess power w (m/s), configuration 1 pilot + 2x Bomb FAB-100 M80, for heights (0, 1000, 2000 m) and velocity Vletasvs (35-60 m/s), maximum engine rating


Table 5 (continued). Calculated characteristics of the optimal flight profile
No.  Segment       Aircraft     Absolute manifold    Rotation per    Instrument
                   mass (kg)    pressure (in Hg)     min. (min-1)    speed (km/h)
1    Start         1279         21                   2300            -
2    Taxi          1276         20                   2000            -
3    Takeoff       1275         28.4                 2670            144.2
4    Climb 1       1275         28.3-23.5            2700            141.2
5    Cruising 1    1266         21.5                 2450            139.4-158.4
6    Holding       1232         20.4                 2350            147.6
7    Planning 1    1225         15                   1100            145.5
8    Penetr. 1     1225         27.3                 2600            205.0
9    Combat        1213         26                   2650            -
10   Penetr. 2     977          23.6                 2650            200.0
11   Climb 2       969          26.8-24.1            2700            135.7
12   Cruising 2    967          18.85                2450            131.7-162.0
13   Planning 2    933          15                   1100            129.1
14   Landing       932          16.0-15.0            1200            129.2



Reserve fuel (10%) = 12.4 kg, Range = 281.2 km, Time = 240.5 min
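As a quick arithmetic cross-check of Table 5, the sketch below sums the segment times and distances copied from the table above:

```python
# (time in minutes, distance in km) for the 14 segments of Table 5; None = not given
segments = [(5.0, None), (5.0, None), (0.464, 0.745), (10.537, 25.529),
            (75.059, 195.45), (15.0, None), (4.155, 10.4), (14.417, 50.0),
            (3.0, None), (11.736, 40.0), (2.99, 6.898), (87.495, 222.7),
            (5.25, 11.538), (0.426, 0.63)]

total_time = sum(t for t, _ in segments)                   # ~240.5 min, as quoted
total_dist = sum(d for _, d in segments if d is not None)  # ~563.9 km flown in total
print(round(total_time, 1), round(total_dist, 1))
```

The summed time reproduces the quoted 240.5 min; the total ground distance of about 563.9 km is roughly twice the quoted range of 281.2 km, which is consistent with reading the range as a mission radius (an interpretation, not stated explicitly in the paper).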

4. CONCLUSION
The program "SOVAPERF", based on the total-energy method, is used for the calculation of piston engine aircraft flight characteristics. The program is very useful for aircraft design, writing pilot instructions and mission planning. It is fast and reliable. The accuracy of the results depends on the loaded engine and aerodynamic characteristics of the aircraft, launcher, missiles, bombs and gun. The method and its results are illustrated by a numerical example for the piston engine SOVA aircraft in MATHCAD 14. The calculated performances of the SOVA aircraft are presented in the paper. The calculated flight performances and capabilities demonstrate the quality of the piston engine SOVA aircraft.

References
[1] Rendulić, Z.: Mehanika leta, Vojnoizdavački i novinski centar, Belgrade, Serbia, 1987.
[2] Smetana, F.: Flight vehicle performance and aerodynamic, AIAA, Wright-Patterson Air Force Base, Ohio, 2003.
[3] Upravljanje avionom V-53, VTUP, Beograd, 1991.
[4] Velimirović, K., Velimirović, N.: Određivanje maksimalnog taktičkog radijusa naoružanog klipno-elisnog aviona, SYM-OP-IS 2010, Tara, Serbia, 2010.
[5] Velimirović, K., Velimirović, N.: Određivanje maksimalnog taktičkog radijusa klipno-elisne bespilotne letelice, program BELI ORAO, SYM-OP-IS 2011, Tara, Serbia, 2011.
[6] Bajović, M., Velimirović, K., Molović, V., Velimirović, N.: Analiza aerodinamičkih koeficijenata na osnovu aerotunelskih i letnih ispitivanja, OTEH 2009, Military Technical Institute, Belgrade, Serbia, 2009.
[7] Bajović, M., Velimirović, K., Molović, V.: Estimacija performansi klipno-elisnog aviona, OTEH 2007, Military Technical Institute, Belgrade, Serbia, 2007.
[8] Velimirović, K., Velimirović, N.: Tactical UAV PEGASUS with in flight adjustable propeller, COMPUTER PROGRAM WHITE EAGLE 2, OTEH 2010, Military Technical Institute, Belgrade, Serbia, 2010.
[9] Velimirović, K., Velimirović, N.: Flight performance determination of the turboprop aircraft, Fourth Serbian Congress on Theoretical and Applied Mechanics, Vrnjačka Banja, Serbia, 2013.
[10] Velimirović, K., Velimirović, N.: Određivanje maksimalnog taktičkog radijusa naoružanog turboelisnog aviona, program KOBACPERF, SYM-OP-IS 2014, Divčibare, Serbia, 2014.
[11] Velimirović, K., Velimirović, N.: Tactical UAV PEGASUS as a platform to carry missiles, OTEH 2011, Military Technical Institute, Belgrade, Serbia, 2011.
[12] Ilić, D., Dević, V., Velimirović, K., Antonić, V., Milenković-Babić, M., Sarić, Z.: Analysis of Lasta aircraft improvement by integration of a turboprop power plant, OTEH 2014, Military Technical Institute, Belgrade, Serbia, 2014.
[13] Velimirović, K.: Avion sa turboelisnom pogonskom grupom: proračun performansi leta, monografija, Military Technical Institute, Belgrade, Serbia, 2015.
[14] Velimirović, K.: Taktička bespilotna letelica sa klipno-elisnom pogonskom grupom: proračun performansi leta, monografija, Military Technical Institute, Belgrade, Serbia, 2013.
[15] Velimirović, K., Velimirović, N.: Flight performance determination of the turbojet aircraft, Serbian Congress on Theoretical and Applied Mechanics, Aranđelovac, Serbia, 2015.


POSSIBLE APPROACHES TO EVALUATION OF TRAINING AIRCRAFTS USED IN FLIGHT SCREENING
SLAVIŠA VLAČIĆ
Military Academy, Belgrade, slavisavlacic@yahoo.com
FRANC HUDOMAL
International Test Pilot School, Canada, franchudomal@yahoo.com
ALEKSANDAR KNEŽEVIĆ
Military Academy, Belgrade, aleksandarknezevic75@gmail.com

Abstract: Flight screening, or the selection of future pilots, is a very sensitive and complex phase in the process of flight training and pilots' education. Most of the known flight screening processes are based on the same requirements, but the aircraft used for selection are different. This paper describes some experiences regarding the aircraft types used in flight screening and explains possible approaches to the evaluation of an aircraft which can be implemented in this flight training phase.
Keywords: Flight screening, training aircraft, evaluation.

1. INTRODUCTION
Flight screening, or the selection of future pilots, is a very sensitive and complex phase in the process of flight training and pilots' education. Most of the known flight screening processes are based on the same requirements, but the aircraft used for the selection phase are different. The difference emerges as a consequence of budget constraints, attitudes regarding flight training, specific demands on pilot training, the syllabus, political and industry reasons and some other, less important circumstances. This diversity of criteria has a great impact on the decision on which type of aircraft can, or even must, be used in the flight training system. However, there are some aircraft characteristics which cannot be ignored and some categories of training aircraft which can be avoided in flight screening. This paper describes some experiences regarding the aircraft types used in flight screening and explains possible approaches to the evaluation of an aircraft which can be implemented in this flight training phase. This comes at a moment when the domestic air force has to change its existing fleet of light piston engine trainers intended for pilot selection.

Table 1: Piston engine trainer characteristics

Performance             Grob 115   Da 20     Zlin 242   G-120      Lasta
Powerplant, kW          139        93        149        190        224
Length, m               7.54       7.16      6.94       8.11       7.97
Wingspan, m             10.00      10.87     9.34       10.18      9.70
Wing area, m2           12.20      11.61     13.86      13.3       12.9
Empty weight, kg        685        529       745        1100       850
Useful load, kg         320        271       250        360        200
Max. speed, km/h        185        256       236        319        310
Rate of climb, m/s      5.3        5.08      5.5        6.5        8.5
Service ceiling, m      3050       4000      5500       5486       6000
Take-off distance, m    461        550       565        654        500
Landing distance, m     457        450       495        562        600
Range, km               1150       1013      1056       1176       -
Price, USD              400,000    250,000   250,000    1,300,000  800,000

2. FLIGHT SCREENING


Flight training is a course of study used when learning to pilot an aircraft. Given the expense of military pilot training, air forces typically conduct training in phases in order to wash out unsuitable candidates. The cost to those air forces that do not follow a gradated training approach is not just monetary but also in lives. Flight training is generally performed in three stages: primary, basic and advanced.


The first stage, primary or ab-initio training, is used to filter those students who lack the aptitude to quickly become military pilots [1]. The second stage is basic flight training. It practices the fundamental skills of handling the plane and guiding it through the airspace. It consists of basic handling, aerobatic flying, navigation flying and sometimes formation flying. The total flight time in this phase of training is between 80 and 130 flight hours, depending on the type of plane and the syllabus. Generally, piston and turboprop powered planes are used, and the jet plane in this phase has become a rarity. Jet training planes are typical for advanced flight training, as a third stage. In every case, the main tool in the process of making a pilot, throughout the flight training, is an adequate training plane, or simply a trainer. Flight screening or ab-initio training is the very first stage of flight training. It is a flying-based assessment of potential candidates, intended to establish whether the student has the necessary aptitude to become a military pilot in a reasonably short time. It will eliminate, for example, those who get airsick or lack coordination or judgment. Experienced instructors make qualitative assessments of applicants. This goal is almost the same in all air forces throughout the world.


For this task it is necessary to implement a training airplane with appropriate characteristics. The trainer cannot be expensive to operate, its complexity has to be low, and its flight characteristics must be benign for beginners. It must adequately support and validate the pilot selection system by confirming which students are likely to succeed in the later, more costly phases of pilot training.


3. TRAINING AIRCRAFTS
The main tool in the process of making a pilot, throughout the flight training, is an adequate training plane, or simply a trainer. There are many different types of training aircraft, especially in the primary and basic flight training phases. There are also many different divisions of training planes, but the most important and most commonly used is the division according to the type of powerplant, which determines the main characteristics and performance [2]. The training aircraft mainly applied in ab initio training are driven by piston engines; they are also called primary trainers. Some of the piston engine trainer specifications are shown in Table 1. This paper does not consider ultra-light aircraft (ULA), because of their characteristics and their irrelevance for a serious system of military pilot training.1

4. EVALUATION PREREQUISITES
Different air forces have a variety of specific prerequisites and needs regarding the trainer which is going to be used for flight screening.

Airplane piston engines of today are, generally speaking, simple, air-cooled, horizontally opposed, four-stroke internal-combustion devices with low operating speeds and low specific output. Trainers driven by piston engines are among the most economical trainers.

Prerequisites can be divided into several groups: military, political, economic and educational. There are some postulates about the trainers which can be used in ab initio training. Depending on the specific situation, the trainer choice can be heavily influenced by many factors which are not connected with trainer capabilities. Military users usually have their so-called tactical and technical requirements, but industry stakeholders and political influences can be the decisive factor in trainer selection, as can budget constraints. For this reason, an inappropriate trainer can be introduced into the flight screening process. In such cases, there can be pressure to use a cheap ULA plane, or a new trainer acquired for the next training phase. In any case, military requirements have to take priority.

Piston engine trainers are often divided, based on the aerodynamic and cockpit configuration, as follows:
fixed landing gear, side by side seats,
retractable landing gear, side by side seats,
retractable landing gear, tandem seats.
The first subcategory is intended for the very basics, and it is represented by light aircraft not too dissimilar from civilian training aircraft. Representatives are the ZLIN 142/242, T-67 Slingsby Firefly, Valmet L-70, Saab Supporter MFI 17, Grob G-120A and Utva-75. For many decades after World War II there was considerable agreement that the primary training phase demanded an aircraft of around 150 kW, with fixed gear and side by side seating. The other two subcategories are more typical of military training and can cover up to 100 flight hours of the syllabus. The first notable trainers in these subcategories emerged during the 1970s and 80s: the Italian SF.260, which was followed by the TB.30 Epsilon, T-35 Pillan and Lasta 1. They are also characterized by more powerful engines of around 250 kW. For a training aircraft, side by side seating has the advantage of enabling the pilot and instructor to see each other's actions, allowing the pilot to learn from the instructor and the instructor to correct the student pilot.

There are generally smaller numbers of primary trainers in service than basic or advanced trainers. Nevertheless, there are significant fleets of certain types in operation: Grob 115 (205), Grob G-120 (90 + 48 on order), T-35 (91 + 9 on order). In addition, the latest generation of Diamond DA-20/40/42 and Cirrus SR.20/22 aircraft are being procured in increasing numbers: DA-20 (57 + 4 on order), DA-40 (37), DA-42 (13), SR.20 (37), SR.22 (39) [3].

Historically, change has come extremely slowly to this category. Many of today's airframes were designed 50 years ago, and the biggest change that has occurred since then was moving the tail wheel from the back of the plane to the front. The only other major visible changes have been to the navigation receivers in the cockpit. Also, in the period beginning in 2003, the general aviation industry converted from shipping no glass cockpits at all to equipping approximately 90% of all new small airplanes with glass cockpits [4]. However, the assessment of potential candidates for military pilot does not necessarily need a glass cockpit, especially in time-limited ab-initio training. A digital cockpit inevitably demands more time for initial technical preparations. That is the reason why some users insist on an analogue cockpit [5].

1 Analyzing materials from the Military Flight Training Conference held in March 2016 in London, it can be seen that relevant air forces with developed training systems do not use, or even consider the implementation of, ULA.

Apart from the tactical and technical requirements, many other factors have to be considered, such as the available money, the number of candidates, the education time,2 the flight training organization, meteorological conditions and other resources intended for the screening process. It is also very important to keep in sight the type and equipment of the next trainer in the process. It is unacceptable to use a trainer with a digital cockpit in flight screening and afterwards transfer to an analog cockpit.
The question therefore arises as to how an air force should select a primary trainer that will give reasonably low operating cost and acceptable steps in difficulty between the different stages of training.
The Serbian Air Force expects such a process in the near future. The airplane used in the flight screening of future Serbian Air Force pilots is the UTVA-75, a domestically produced piston-engine trainer. This plane belongs to the older generation of piston-engine trainers, and a new trainer has to replace it before the end of 2017 [6]. One of the mainstream options is to use the Lasta as a primary trainer. Even setting aside the option of using the Lasta trainer for primary training, because of its complexity and the consequently prolonged selection time, there are some other limitations which have to be considered.

Picture 1: Utva-75
According to existing experience and methodology of aircraft evaluation, there are some requirements and prerequisites that decision makers have to keep in mind.
Flight screening with a syllabus of only 12 flight hours, including VFR basics, calls for moderate performance and complexity, with fixed landing gear, a spacious side-by-side cockpit with military-type dual controls3 and a good field of view in all maneuvers. Take-off and landing distance, as well as climb performance, are not of great influence, but a cruising speed of more than 130 kts is not desirable. Control harmony throughout the envelope is very important, with well harmonized roll and pitch forces. The priority list also includes good static and dynamic stability and lateral/directional control without PIO tendency. The plane must be a good platform for teaching students the critical task of trimming for airspeed. Control forces have to allow an easy transition to larger aircraft.

For over 30 years, the UTVA-75 has covered the ab initio training phase, which consisted of 10 flight hours. For the last 15 years, the UTVA-75 has also covered some parts of basic flight training. This was imposed by the loss of all Galeb G-2 aircraft. The cockpit of the UTVA-75 is spacious, its handling characteristics are satisfying and its stall behavior is benign, but the main restriction is the impossibility of performing basic aerobatic maneuvers, including the spin. Assessment of young pilots was difficult, especially of cadets who have to be streamed to the fast jet trainer Supergaleb G-4. Any kind of night or IFR flying was impossible, so the first introductory night and IFR sorties are flown on the advanced jet trainer.

Stall and spin characteristics represent a category of their own. The high angle of attack flight regime requires a clear stall warning. Easy recovery from the stall must also be a priority. A conventional and predictable entry to the spin is highly desirable, as are a mild spin rotation speed and oscillatory behavior in the first two spin turns. A mild spin rotation speed for such a plane is 4-6 seconds per turn.

According to the existing Military aviation study program, the maximum period of time intended for flight screening is two months, including ground school and introductory technical lectures about the plane, its construction, systems, manuals, etc. This means that the training scope is a maximum of 12 flight hours (VFR only), and in that time a plane with high performance and workload cannot be introduced. A plane with retractable landing gear and a tandem cockpit with fully digital equipment and systems is not desirable and does not provide possibilities for objective assessment. Such a configuration is usually linked to a heavier trainer. The beginner's workload is also high and the candidate's capabilities are overstressed.

The primary trainer has to provide good glide performance, which allows forced landing procedure training. The syllabus must be adjusted to give the student time to learn to fly safe approaches, so the plane has to allow this in an acceptable time scope and manner. A final approach speed of up to 80 kts is desirable, and a no-power, no-flap approach is highly desirable.
In this particular case, the distance and altitude of the training airspace are not of great influence.

In analyzing the possible successor of the UTVA-75 (Picture 1), the step forward has to be made in the availability of aerobatic and spin performance. The successor has to be considered in the light of the Lasta plane as the next step in the flight training syllabus.

5. EVALUATION OF TRAINING AIRCRAFTS USED IN FLIGHT SCREENING
Evaluation is a systematic determination of a subject's merit, worth and significance, using criteria governed by a set of standards.

2 Flight screening is usually a step before entrance into the military pilot school or Military Academy. In the case of the Serbian Air Force it cannot last more than two months.
3 Pilot sticks rather than yokes, and separate engine controls for each pilot.


Evaluating a complex technological product such as a training aircraft requires a systematic approach.


There are different approaches to the process of training airplane evaluation. Usually, it is done by national test centers, specialized commercial testing organizations or ad-hoc commissions. Sometimes the evaluation is done only by analyzing promotional materials or under political pressure, which is unacceptable and very detrimental to the training and education system.

Different models of the flight training system consequently mean a trainer with different characteristics. In our case, the possible successor of the UTVA-75 was considered, with all the necessary prerequisites of this specific case. Having in mind the syllabus scope, goal and available time for selection, combined with the characteristics of the next-step trainer (Lasta), the profile of the desirable primary trainer was created. According to this, the ULA and the Lasta as primary trainers were not considered, because their characteristics do not fit this role. However, this does not mean that, in case of necessity, such a plane cannot be used.

According to experts' experience and reviews, the only legitimate and proper way is testing on the ground and in the air. The scope of evaluation depends on different circumstances, especially on the evaluation object itself. In the case of a primary trainer and its level of technical complexity, a minimum of three test sorties is required. The flight testing programme totals at least two and a half flight hours.

By highlighting the syllabus characteristics, equipment profile, aircraft configuration, desirable performance and handling characteristics, the basis for a thorough evaluation is made. It should not be allowed to evaluate a plane with ad hoc commissions or only by analyzing commercial proposals. The only proper way is evaluation by experienced experts (test pilots and engineers) and experienced test organizations, including commercially based ones. The test program must include, at least: design features, cockpit, performance, handling qualities and systems. It requires a minimum of three test sorties.

The test team must be experienced, with broad experience of such an airplane category. This is one of the most important conditions for an objective evaluation. Having a test facility does not necessarily mean broad experience of such a plane category. Test pilot schools have different training modules and courses providing training for test pilots, flight test engineers, flight engineers and technicians involved in flight testing. Schools also provide light aircraft test pilot courses.
Usually, testing is divided into several groups of evaluation elements:
- The role of the aircraft
- Design features
- Cockpit
- Performance
- Handling qualities
- Systems.



The main evaluation elements inside the groups are as follows:
Cockpit: cockpit access, cockpit size, reachability of controls, student and instructor arrangement, field of view, control dimensions and geometry.
Performance: touch-and-go distance, climb performance, cruise performance, glide performance (propeller effects).
Handling qualities: ground procedures, taxiing, take-off, maneuvering flight, stall warning, stall characteristics, aerobatics, spin entries and stabilized spins, spin recoveries, approach, landing.
Systems: landing gear, flaps, engine and other existing subsystems (for example INS/GPS, AHRS, EFIS, TCAS).
All those elements have to be closely connected with the tactical-technical requirements and evaluation prerequisites.

This methodology is not new and it does not bring any novelty, but it must be followed with no exceptions. Everything else could lead to the wrong choice and a misguided decision.
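One simple way to make such group-based evaluation traceable is a weighted scoring sheet. The sketch below is purely illustrative: the groups follow the paper, but the weights and scores are invented for the example and are not proposed values:

```python
# Illustrative weighted scoring across the evaluation groups named above.
weights = {"role": 0.10, "design features": 0.15, "cockpit": 0.20,
           "performance": 0.20, "handling qualities": 0.25, "systems": 0.10}

def weighted_score(scores: dict) -> float:
    """Combine per-group scores (0-10) into a single weighted figure of merit."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[g] * scores[g] for g in weights)

# Hypothetical candidate trainer scored by the test team
candidate = {"role": 8, "design features": 7, "cockpit": 9,
             "performance": 6, "handling qualities": 8, "systems": 7}
print(round(weighted_score(candidate), 2))   # -> 7.55
```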

6. CONCLUSION
Flight screening, or ab-initio training, is the very first stage of flight training. It is intended to establish whether the student has the necessary aptitude to become a military pilot in a reasonably short time. The flight screening phase must be adequately supported by a suitable trainer. It has to validate the pilot selection system by confirming which students are likely to succeed in the later, more costly phases of pilot training.

References
[1] Braybrook, R., Valpolini, P.: Trainer order in prospect, Armada International, 1/2011.
[2] Vlačić, S.: Development perspectives of piston and turboprop trainers and their COIN derivatives, International Scientific Conference on Defense Technologies OTEH 2012, Belgrade, 2012.
[3] Market Report 2016, Part I, Military Flight Training, Defence IQ, London, 2016.
[4] Trescott, M.: Max Trescott's G1000 Glass Cockpit Handbook, Glass Cockpit Publishing, Mountain View, CA 94040, 2006.
[5] https://www.flightglobal.com/news/articles/in-focus-grob-aircraft-bullish-over-more-g120tp-sales-373317/ accessed 30 July 2016.
[6] Vlačić, S., Roenkov, S., Knežević, A., Vlačić, I.: Use of the commercial software tools in the preparation phase of the military pilot education and training, V International Conference of Information Technology and Development of Education (ITRO 2014), Zrenjanin, 2014.
[7] Braybrook, R.: Trainers at a Cusp, Armada International, 5/2009.
[8] The Market for Military Fixed-Wing Trainer Aircraft 2011-2020, Forecast International, Newtown, USA, 2011.
[9] Vlačić, S., Knežević, A., Pekić, N.: Comparative analysis of the analog and the digital cockpit of Lasta training aircraft, International Scientific Conference on Defense Technologies OTEH 2014, Belgrade, 2014.

INTEGRATION OF TACTICAL - MEDIUM RANGE UAV AND CATAPULT LAUNCH SYSTEM
ZORAN NOVAKOVIĆ
Military Technical Institute, Belgrade, novakoviczoca@gmail.com
ZORAN VASIĆ
Military Technical Institute, Belgrade, vti@vti.vs.rs
IVANA ILIĆ
Military Technical Institute, Belgrade, ivilic76@yahoo.com
NIKOLA MEDAR
Military Technical Institute, Belgrade, nmedar9@gmail.com
DRAGAN STEVANOVIĆ
Military Technical Institute, Belgrade, stevanovic.dragan.daca@gmail.com

Abstract: In this paper, the selection of an appropriate UAV catapult launch system, from the supply on the world market, for the existing tactical - medium range Unmanned Aerial Vehicle (UAV) in the Serbian Army is analyzed. Special emphasis is placed on the UAV structural strength in the longitudinal acceleration direction to which it is exposed on the launch ramp of the catapult. In addition, the UAV accommodation on the catapult launch carriage is analyzed. The ultimate goal of this analysis is to define the necessary changes to the structure of the existing UAV in order to successfully integrate it with the selected catapult launch system.
Keywords: UAV, launching, launching device, catapult, UAV integration redesign.

1. INTRODUCTION
Increasingly, the development of the unmanned aerial vehicle (UAV or aircraft) introduces a new term: the unmanned aerial system (UAS), which is considered a hybrid system. A UAS comprises UAVs (one or more), a ground control station (GCS), a data processing system (DPS), and a launch and recovery system. Each of these subsystems is being developed in several directions, so the number of possible combinations of the hybrid UAS definition grows significantly.
The need for UAV catapult launch arose for the following reasons. First, UAV catapult launching eliminates the requirement for a take-off path. This is of crucial importance in combat conditions, when the availability of a runway is very uncertain. Further, the propellers, being of fixed pitch, would otherwise have to be designed for best take-off performance. This would severely compromise the performance of the aircraft in flight, [1]. In addition, UAV take-off from a runway requires additional fuel capacity, which increases the weight of the aircraft.

Several systems of launching devices (LDs) for unmanned aerial vehicles have been developed so far. Existing LDs can be grouped into six categories: (1) Pneumatic, (2) Hydraulic, (3) Bungee cord, (4) Kinetic Energy, (5) Electromagnetic, (6) Rocket Assisted Take-off (RATO) and other methods, [2], [3], [4].

During their use, the advantages and disadvantages of each type of LD were recognised. This led to the conclusion that an LD must be lightweight, must be able to be operated by minimal personnel, and must have a small storage volume. These factors need to be considered and incorporated into the conceptual design of LDs. The UAV launching device must also have the possibility of being set up and launching a UAV within fifteen minutes, [3]. An important factor is the purchase price and the cost of LD maintenance, which is perhaps crucial to the military budget.

All the above-mentioned concepts, regardless of their relative advantages and disadvantages, are in operational use in the armed forces of a number of NATO countries. They are used in those situations when their advantages come to the fore.

2. UAV LAUNCHING DEVICE SELECTION
The launching device's main task is to hand over to the UAV the energy previously accumulated in its system, so that the UAV, at the moment of leaving the catapult, has a speed at least 15% greater than the stall speed for the given UAV configuration. To have a successful take-off, the UAV should have sufficient lift force after the instant of leaving the catapult, when its own driving propeller takes over to achieve stable flight, [4].
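As a rough illustration of this sizing rule, the following sketch computes the minimum launch speed from an assumed stall speed and the kinetic energy the catapult must impart; the stall-speed value is illustrative and is not taken from the paper:

```python
def min_launch_speed(v_stall_ms: float, margin: float = 0.15) -> float:
    """Minimum speed at the end of the launch stroke: stall speed plus a 15% margin."""
    return (1.0 + margin) * v_stall_ms

def launch_kinetic_energy(mass_kg: float, v_launch_ms: float) -> float:
    """Kinetic energy (J) the launcher must hand over to the UAV, ignoring losses."""
    return 0.5 * mass_kg * v_launch_ms ** 2

# Illustrative values only: a 250 kg UAV with an assumed 21 m/s stall speed
v_req = min_launch_speed(21.0)   # ~24.2 m/s, close to the 25 m/s quoted for PEGAZ
print(round(v_req, 1), round(launch_kinetic_energy(250.0, v_req) / 1000.0, 1), "kJ")
```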



Table 1. List of LD essential requirements, [3]
Final launch velocity
Maximum take-off mass
Operational temperature range
LD mass
Maximum length of the launch envelope
Acceleration at launch: 10 G
Launching angle range
Launching remote control
Set-up time: < 15 minutes
LD disassembling for storage
Number of the set-up personnel
LD safety, reliability and easiness to operate

The launch parameters of the PEGAZ (PEGASUS) UAV (maximum take-off mass 250 kg, maximum take-off speed 90 km/h, i.e. 25 m/s) require an LD with appropriate launch performance, and the solution should be sought in the selection of: a pneumatic catapult, a hydraulic catapult, a hybrid hydraulic-pneumatic catapult, or rocket assisted take-off (RATO).


From the current range of UAV LDs on the global market [2], [5], [6] which meet the criteria set out in Table 1, the following LDs could be taken into consideration for PEGAZ:

According to their launch performance, pneumatic (or hydraulic) LDs are perhaps the best, because they have huge launch power and because they can launch multiple types of UAV (reconnaissance, aerial target) designed for different missions. They are characterized by a relatively uniform acceleration along the launch rail. The disadvantage of this type of LD is the complexity of its construction, which reduces its reliability. The purchase price of a pneumatic (or hydraulic) LD is relatively high compared to other LD concepts, as are the maintenance costs.

1. MDS HERCULES pneumatic launcher (England)

The RATO (Rocket Assisted Take-off) launching device is characterized by a nearly zero-length launch ramp and a relatively higher level of acceleration. This is the most reliable drive for UAV launch in extreme environmental conditions, for UAVs of higher mass (500 kg) and in conditions of scarce launch space (boat deck). However, rocket launch has the disadvantage that the rocket bottle is ejected from the UAV (2-3) seconds after the start, which is unacceptable from the ecological standpoint. Rocket bottles must be stored in separate areas under the same conditions as explosives. This is inconvenient, as it imposes additional costs and liabilities. In addition, a rocket-powered launch reveals the position, which is unfavourable in combat conditions, [3].

Picture 1. MDS HERCULES pneumatic launcher with aerial target BANSHEE

Max UAV launch mass, [7], [2]...........................250 kg


Max UAV launch speed.......................................55 m/s
2. Aries RO-01 or Aries ALPPUL LP-02 pneumatic
launcher (Spain)

Bungee catapult systems employ the characteristics of energy stored within high-powered, highly elastic bungees to launch the UAV. The concept of an LD with elastic cords is characterized by the simplest design structure in relation to the other LD concepts. Because of their simple construction, they have the lowest purchase price and the lowest maintenance cost. Generally, bungee catapults are considered LDs of lower launch power (maximum take-off weight 50 to 55 kg), [3], except the one in operational use in NATO that stands out for its launch performance (take-off weight 140 kg, take-off speed 34 m/s). The disadvantage of the bungee catapult is the initial jerk, which can unfavourably affect the sensitive payload of the UAV, but this is successfully overcome by additional fixing equipment inside the aircraft, [1].

Picture 2. Aries RO-01 pneumatic launcher with UAV SIVA

When choosing a catapult for the Serbian Army (regardless of whether it is purchased or produced), all the criteria set out in Table 1 must primarily be met, and they must be in accordance with the characteristics of the UAV (maximum launch velocity, maximum launch weight, maximum mean axial acceleration that the UAV withstands). The selection should favour a multi-purpose catapult with which it will be possible to launch both the current UAVs and UAVs of future domestic development.

Picture 3. Aries ALPPUL LP-02 pneumatic launcher with UAV SIVA


Max UAV launch mass, [8], [2]...........................360 kg


Max UAV launch speed.......................................34 m/s


3. ESCO-Zodiac HP-3407 hydraulic-pneumatic launcher (USA)

Picture 6. Speed/acceleration changes on the ARCHER catapult ramp

Picture 4. ESCO-Zodiac launcher HP-3407


Max UAV launch mass, [9], [2]............................340 kg

The speed/acceleration profile in Pic. 7 relates to the UAV SIVA launch with the launch parameters: launch mass 300 kg, launch speed 33.6 m/s.

Max UAV launch speed........................................33 m/s


4. ARCHER hydraulic launcher (Switzerland)

Picture 7. Speed/acceleration changes on the ALPPUL LP-02 catapult ramp

Picture 5. ARCHER hydraulic launcher

Max UAV launch mass, [10], [2].........................320 kg


Max UAV launch speed.......................................34 m/s

The mean acceleration of the RANGER aircraft on the ARCHER launcher rail is close to 5 g, while the mean acceleration of the SIVA aircraft on the ALPPUL LP-02 launcher is around 6 g. It is evident that the launch speeds on both diagrams are practically the same, about 33 m/s, while the launch mass of the SIVA aircraft is higher by approximately 50 kg relative to the launch mass of the RANGER aircraft. It can be concluded that the roughly 1 g lower level of mean acceleration on the ARCHER catapult is a result of the difference in aircraft launch mass (~50 kg).
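Assuming roughly constant acceleration along the rail, launch speed, mean acceleration and stroke length are linked by v^2 = 2*a*L. The sketch below applies this to the figures quoted above; the resulting stroke lengths are only implied estimates, not manufacturer data:

```python
G = 9.81  # gravitational acceleration, m/s^2

def implied_stroke_length(v_launch_ms: float, mean_accel_g: float) -> float:
    """Stroke length implied by a constant-acceleration launch: L = v^2 / (2 a)."""
    return v_launch_ms ** 2 / (2.0 * mean_accel_g * G)

# RANGER on ARCHER: ~33 m/s at ~5 g; SIVA on ALPPUL LP-02: 33.6 m/s at ~6 g
print(round(implied_stroke_length(33.0, 5.0), 1))   # ~11.1 m
print(round(implied_stroke_length(33.6, 6.0), 1))   # ~9.6 m
```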

5. Launching of the PEGAZ UAV by a rocket assisted take-off catapult should be taken into consideration only if none of the above launchers meet the requirements (Table 1).

3. ANALYSIS OF UAV STRUCTURE STRENGTH ON THE LAUNCH RAMP OF THE CATAPULT

Since the launching parameters of the PEGAZ UAV are closer to the launch parameters of the RANGER UAV, for this calculation it can be adopted that the acceleration profile in Pic. 6 is more credible than the acceleration profile in Pic. 7. Even though the launch speed in Pic. 6 is ~33 m/s (much higher than the 25 m/s which the PEGAZ UAV requires), one can expect a similar level of acceleration if the PEGAZ UAV were launched with the ARCHER catapult. This conclusion does not diminish the quality of the ALPPUL LP-02 catapult, as the acceleration profile in Pic. 7 refers to a heavier aircraft than the one in Pic. 6.

In the selection of a catapult for the PEGAZ UAV, the first step is to check the strength of the existing UAV structure during acceleration on the launch ramp of the catapult. In addition, it is necessary to check the sensitivity of the UAV payload and its fastening inside the UAV, but this analysis is beyond the scope of this paper. The actual analysis is performed according to the available acceleration profiles of the ARCHER ("RUAG", Switzerland) launcher, Pic. 6, and the ALPPUL LP-02 ("ARIES", Spain) launcher, Pic. 7. The speed/acceleration profile in Pic. 6 relates to the UAV RANGER launch with the launch parameters: launch mass (220-275) kg, launch speed (21-38) m/s.

3.1. Necessary changes to the PEGAZ structure for catapult integration
The PEGAZ UAV airframe is designed as a high-wing monoplane with a fuselage nacelle and two tail booms carrying one horizontal tail and two vertical tails. The aerodynamic scheme with a pusher propeller enables safe maintenance and operation before and after flights, without dangerous contact with the rotating propeller. The aerodynamic scheme also enables mounting of the payload and equipment in the fuselage nose section.

Picture 8. PEGAZ side layout (view without wing, tail booms and tail units)

PEGAZ UAV airframe is made of modern composite


materials. Sandwich composite panels are used for airframe
parts with complex shapes in the outer UAV skins and for
simple internal structure as well. There are numerous local
stiffeners made of laminated fabrics and laminated wood
plates at the points of concentrated loads. Original project
design defined UAV airframe according to dominated flight
loads in flight envelope, and according to extreme loads that
can develop during landing on UAV landing gear and
parachute landing in the case of emergency. First phase of
UAV airframe conceptual design does not have tactical and
technical requirements for take-off using catapult system or
landing using arrester hook or air bag. In other words,
design of parts, assemblies and the whole UAV airframe is
dimensioned according to aerodynamic and inertial loads
that are developed during flight.

Fuselage nacelle is consisted of four composite stringers


spreading along the whole fuselage, from nose frame
(Frame 1) to firewall frame (Frame 7). Stringers (longeron
type) with trapezoidal shapes in cross section are made of
foam core and two layers of glass fabrics. Positions of
stringers in fuselage cross section are chosen in a way to
fully encompass large fuselage openings, which reinforce
fuselage skin together with skin doublers. Fuselage frames
are cut in the points of stringers interconnections.
Fuselage cross section is hexagon-shape having horizontal
upper and lower skins. Lateral skins of fuselage cross
sections do not have vertical sides. These complex shapes
seriously complicate integration of added structure
reinforcement intended to receive take-off catapult driven
loads, avionics equipment units, and payload into UAV
airframe.

Fuselage nacelle design characteristics


Fuselage nacelle together with central part of wing makes
one-part technological and constructive assembly. Fuselage
nacelle is made of several sections, such as:
Nose front fuselage section with keelson assembled for
mounting of nose landing gear, avionics and part of
hydraulic brake installations.
Front central fuselage section is dedicated for embedding
of UAV flight navigation equipment, and for mounting
of reconnaissance equipment.
Central fuselage section is used for embedding of front
fuel compartment and part of hydraulic brake
installations.
Central fuselage with central part of wing section, part
and starboard, is used for embedding of integral fuel
tanks and for attachment of fittings for tail booms and
outer wing assemblies.
Rear central fuselage section is used for embedding of
rear fuel compartment and parachute compartment for
emergency landing, and for attachment fittings for main
landing gear composite bar spring.
Rear fuselage section is used for UAV engine and engine
equipment.

There is an opening in the lower fuselage skin allowing nose landing gear movement during extension and retraction. There is a circular opening between frame 3 and frame 4 for attachment of the gimbaled optoelectronic payload set. There are two hard points for attachment of the main landing gear composite spring bar, just behind the main fuselage-to-wing bulkhead (frame 5). The main landing gear composite spring bar is attached to the modified lower fuselage skin by attachment fittings and bolts. The attachment fittings enable free elastic deformation of the composite spring bar during landing, absorbing the vertical UAV energy (Pic. 9). The design of the fuselage-to-landing-gear attachments could not provide robust reception of the longitudinal loads induced by the catapult system. That is why the landing gear attachment points are not taken into consideration for receiving catapult take-off loads.

Fuselage sections and UAV avionics attachment that meet


aircraft weight and balance define internal UAV
arrangement. Internal UAV structure is made of composite
sandwich panels (laminates with foam core) and appropriate
reinforcements at the points where the concentrated loads
are distributed. There are seven frames and two wing ribs at
each wing side in fuselage nacelle (Pic. 8).

Picture 9. Lower part of UAV fuselage with landing gears and optoelectronic payload


Other UAV airframe assemblies (outer trapezoidal wings, tail booms, horizontal and vertical tails) are made of modern sandwich-type laminated composite materials with appropriate reinforcements at the points where concentrated loads act on the airframe, [11], [12], [13].

Chosen solution for connnection of fuselage nacelle and
catapult launch carriage system at the frame four and six is
optimal in the sense of meeting different contradictory
requirements related to take-off by catapult system, [14], [15]:
Design solution enables production of lighter, shorter and
compact-made catapult launch carriage assembly,
Center of gravity (CG) of the whole UAV is located
almost at the midpoint between the two supports,
Fuselage nacelle and catapult launch carriage
interconnection does not make any interference with
main landing gear spring bar and its bolt connections to
the frame structure. Main landing gear spring bar to
fuselage structure attachment is designed for vertical
loads generated during landing and not for loads
longitudinally distributed along the frame nacelle
generated by catapult system.

Analysis of possible locations of airframe


reinforcement and integration with catapult system
The optimal location for UAV structure and catapult launch
carriage interface unit must be chosen and designed carefully.
Fuselage nacelle is relatively long in comparison to the whole
UAV length, so the first solution for interface attachment
fittings positioned close to the first and last fuselage frame
was excluded from further consideration because the catapult
launch carriage should be in that case too long and
cumbersome. Two central fuselage frames, fourth and sixth,
were chosen as an acceptable solution for fuselage to catapult
launch carriage attachment and interface unit.

Picture 11. Redesigned structure elements at the front catapult launch carriage support
Main (front) catapult launch carriage support at the frame
four of the fuselage nacelle is designed for receiving of
horizontal and vertical loads (Pic. 11). Conceptually, the
main support is designed as one transversal steel tube which
is actually made of two tubes, port and starboard, connected
by one connecting muff at the fuselage symmetry plane
(center line). Diameter of the tubes is 25 mm and tube wall
thickness is 3 mm. At the fuselage center line, the axial
movement and rotation of the tubes are constrained by two
bolts and connecting muff. Connecting muff is attached to
new-added longitudinal fuselage rib at the fuselage center
line by bolt connections. The main vertical and horizontal
loads are transferred from the two tubes to the fuselage
structure through the two main supports with bushing, port
and starboard, made of laminated wood plate reinforced by
glass fabrics. Two main supports with bushing are shaped to
align to the inner surfaces of the fuselage skins and to the
lower trapezoidal-shaped fuselage stringers. Main supports
have the hole close to the frame four. Metal bushing is
embedded into the main support hole. Inside diameter of the
metal bushing is with high tolerance to the outside diameter
of the steel tube. Metal bushing (on the port and starboard
side) has flange that acts as an axial limiter protecting from
hard impact of catapult launch carriage steel arms to fragile
UAV composite skin and its consecutive damage.

Picture 10. Fuselage internal structure layouts with catapult launch carriage supports
Frame four of fuselage nacelle is used as the rear support of
the gimbaled optoelectronic payload platform, in the lower
part of the frame, and as the vertical support of two front
emergency parachute brackets, in the upper part of the
frame. According to original design, frame is made of 5 mm
light foam core and four glass fabric plies, two from each
side, and with local reinforcements made of laminated wood
plates at the areas where the concentrated load acts. Actual
modification of the frame meant additional cutting of the
frame in the lower area where the stringers and new-added
main wood support with bushing go through the frame (Pic.
11). Main wood support with bushing is the main structural
part that receives the main longitudinal load generated by
catapult launch system.
Frame six of the fuselage nacelle is made of a light foam core and glass fabrics. The frame is used as the rear support of the two rear parachute brackets, in the upper part of the frame. The rear support is designed to receive vertical loads (along the z axis) in the frame plane, and not along the other axes. The frame modification meant embedding a new-added counterpart support made of laminated wood plate at the place where the outer catapult launch carriage support is located. The exterior catapult launch carriage support is made of a tetrafluoroethylene transversal bar with an L cross section and the same length as its internal counterpart support made of laminated wood reinforced with glass fabrics. The mechanical connection of the outer support and the inner counterpart support is made by three bolts.

The main landing gear concept and its position relative to the catapult launch carriage supports require additional design effort related to resolving the kinematics of the catapult launch carriage arms, so as to avoid collision with the landing gear, fuselage structure, optoelectronic payload and engine propeller.

The strength calculation of the attachment of the catapult launch system to the UAV is done using classical engineering methods. The reserve factor, which represents the ratio of the ultimate stress of the material used to the stress in the structural element, is greater than 1 for all elements of the connection between the fuselage nacelle and the catapult launch system, at frames four and six.

3.2. PEGAZ Strength Analysis

In the case of take-off using a catapult launching device, the strength of the structure of the PEGAZ UAV must be taken into consideration. All joints of the main parts of the UAV structure, and all elements that carry attached concentrated mass, have to be designed for a load case in which an inertial force is applied in the direction of the x axis of the UAV. The value of the inertial force is calculated as follows:

Fin = nx * g * m    (1)

where nx = 8 represents the factor of inertial load in the direction of the x axis and m represents the mass of the vehicle. Besides that, the ultimate factor j = 1.5 must be taken into consideration. For all vital joints, the ultimate factor has to be multiplied by the joint factor jv = 1.2. To calculate the strength of the joint between the UAV and the catapult launching device (for the maximum UAV mass m = 250 kg), the value of the force introduced into the joint is

Fx = j * jv * Fin = 35316 N    (2)

This force is applied at the center of gravity, aligned with the x axis of the UAV, in the positive direction of the axis. The force Fx is reduced to the main catapult launch carriage support, at frame four of the fuselage nacelle, which causes a bending moment My with intensity

My = Fx * 0.214 = 7558 Nm    (3)

The front support receives all of the horizontal load Fx, while the moment My is resolved as a couple of vertical forces (Pic. 12) that are reacted by the front and rear support points:

Fz = My / 0.9 = 7558 / 0.9 = 8398 N    (4)

3.3. Launch Carriage Interface Requirements

The main task of the launch carriage interface is to support the UAV during the launching phase. The launch carriage mechanical interface is also essential in the system preparation and pre-flight state until launch. The UAV is held in place during the pre-flight operations, including checks and engine ground run. In the launch phase, the UAV is pushed to flight speed and, at the end of the launch phase, the UAV leaves the launcher in the flight direction.

Mechanical interface requirements, [10]
Mechanical interface operation is sequentially distinguished in three phases:
Loading the UAV onto the launcher.
The UAV is held in a static position with only gravitational forces, also combined with the engine thrust. This requires longitudinal fixation of the UAV in both directions.
The UAV is accelerated by the launcher and slides out of the guidance at the end of the launch. Longitudinal fixation of the UAV only in the backward direction is required.

Physical requirements
Dimensions
The dimensions of the interfacing elements of the PEGAZ UAV (nominal weight of 250 kg) are given in Pictures 10 and 11.
Forces
Force for acceleration of the PEGAZ UAV: max 22073 N, corresponding to 9 g with the 250 kg PEGAZ UAV mass, acting in the direction of the rail. This force is to be transferred by the forward hooks. The rearward support serves for guidance of the UAV.
Weight force of the PEGAZ UAV: max 2500 N with respect to the ground vertical.
Center of gravity range of PEGAZ: max 350 mm behind the forward hook.
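As a quick numerical cross-check of equations (1)-(4), the following sketch recomputes the joint loads from the factors given above; g is taken as 9.81 m/s^2, and the 0.214 m moment arm and 0.9 m support spacing are the values used in the paper:

```python
G = 9.81          # gravitational acceleration, m/s^2

def catapult_joint_loads(mass_kg=250.0, nx=8.0, j=1.5, jv=1.2,
                         arm_m=0.214, support_spacing_m=0.9):
    """Recompute the inertial and joint loads of equations (1)-(4)."""
    f_in = nx * G * mass_kg            # (1) inertial force, N
    f_x = j * jv * f_in                # (2) ultimate joint force, N
    m_y = f_x * arm_m                  # (3) bending moment at the front support, Nm
    f_z = m_y / support_spacing_m      # (4) vertical force couple on the supports, N
    return f_in, f_x, m_y, f_z

print([round(v) for v in catapult_joint_loads()])
# -> [19620, 35316, 7558, 8397]; the 1 N difference from the quoted 8398 N
#    comes only from rounding My before dividing by 0.9
```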

Picture 12. Forces acting on the catapult launch carriage supports

4. CONCLUSION
The proposed redesign of the PEGAZ UAV enables it to be launched from any catapult whose maximum acceleration on the launch rail is up to 8 g. The launch parameters of the PEGAZ UAV (max launch speed 25 m/s, max launch mass 250 kg) are less demanding than the launch parameters of the RANGER UAV or SIVA UAV. This confirms that the launch performance of the ARCHER or ALPPUL LP-02 catapult makes it possible to launch PEGAZ successfully. It is realistic to expect that the PEGAZ UAV will be exposed to a maximum of 5 g average acceleration on the launch rail of either of these two launchers. Also, the proposed redesign of the PEGAZ UAV and the set requirements for the launch carriage interface enable launch carriage adaptation on the ARCHER or ALPPUL LP-02 launchers according to the modular dimensions of the PEGAZ UAV.

References
[1] Austin, R.: Unmanned aircraft systems: UAVS design, development and deployment, John Wiley & Sons Ltd, Chichester, West Sussex, United Kingdom, 2010.
[2] Novaković, Z., Medar, N.: Lansirni sistemi bespilotnih letelica, Podaci o naoružanju, faktografska sveska, ISSN 1820-3426, VTI, Beograd, 2015.
[3] Francis, J.: Launch System for Unmanned Aerial Vehicles for use on RAN Patrol Boats, Final Thesis Report 2010, SEIT, UNSW@ADFA.
[4] Novaković, Z., Medar, N.: Design of UAV Elastic Cord Catapult, OTEH 2014 Symposium, MTI, Belgrade, 2014.
[5] Jane's: Unmanned Aerial Vehicles and Targets, Issue 29, 2007, Launch and Recovery chapter.
[6] Unmanned Vehicle Handbook, Issue 21, SHEPHARD, 2013, pp. 164-165, http://shephardftp.co.uk/digital_editions/sample%20pages/UVH_21_sample/index.html#/16/
[7] Catalogue of "Meggitt Defence Systems" firm, http://www.meggittdefenceuk.com/PDF/Hercules%202014_App%20Mod%201.pdf
[8] Catalogue of "Aries Ingenieria y Systemas" company, ALPPUL LP-02 pneumatic launcher, http://www.ariestesting.com/solutions-by-applications/unmanned-aerial-systems/uav-launchers/
[9] Catalogue of "Zodiac Aerospace" company, http://s76586.gridserver.com/downloads/documents/hp3003.pdf
[10] Catalogue of "RUAG Aerospace" company, http://ofp.gamepark.cz/_hosted/swissteam/Pictures_of_Swissmod/archer_flyer.pdf
[11] Niu, M.C.Y.: Composite Airframe Structures, Conmilit Press Ltd., Hong Kong, 1992.
[12] Aronsson, A.: Design, Modeling and Drafting of Composite Structures, Master thesis, Lulea University of Technology, 2005:060 CIV, ISSN 1402-1617, 2005.
[13] Willinger, M.: Industrial Development of Composite Materials: Towards a Functional Appraisal, Composites Science and Technology 34, pp. 53-71, 1989.
[14] Niu, M.C-Y.: Airframe Structural Design, Practical Design Information and Data on Aircraft Structures, Conmilit Press Ltd., 1988.
[15] Stinton, D.: The Design of the Aeroplane, BSP Professional Books, Oxford, 1983.
[16] Bruhn, E.F.: Analysis and Design of Flight Vehicle Structures, Tri-State Offset Company, 1973.
[17] STANAG-4671

CONTRIBUTION TO THE MAINTENANCE OF MI-8 HELICOPTER IN THE SERBIAN AIR FORCE
ZORAN ILIĆ
Technical Test Center, Belgrade, zoranilic_65@yahoo.com
BOŠKO RAŠUO
Faculty of Mechanical Engineering, Belgrade, brasuo@mas.bg.ac.rs
MIROSLAV JOVANOVIĆ
Technical Test Center, Belgrade, mjovano@sbb.rs
LJUBIŠA TOMIĆ
Military Technical Institute, Belgrade, ljubisa.tomic@gmail.com
STEVAN JOVIČIĆ
Technical Test Center, Belgrade, stevanjovicic@gmail.com
RADOMIR JANJIĆ
Technical Test Center, Belgrade, lari32@mts.rs
NENKO BRKLJAČ
Technical Test Center, Belgrade, brkljacnenko@gmail.com

Abstract: Modern helicopters are equipped with Health and Usage Monitoring Systems (HUMS). Serbian Air Force helicopters are not equipped with HUMS. One of the most effective ways to monitor the condition of a helicopter is vibration measurement, i.e. defining the frequency spectrum using appropriate equipment. Based on the analysis of vibrations measured on a helicopter, it is possible to determine various types of failures and the improper functioning of component parts or aggregates. The basic problem is to determine which vibration amplitudes are characteristic of proper operation and which are characteristic of incorrect operation of the helicopter. In the case of a frequency spectrum disorder due to the reduced function of certain working elements, the vibration amplitudes will be increased at the rotational speeds of those elements. In such circumstances, the only way to assess the condition of the helicopter is a comparative analysis of previously measured vibration results and the present measurements. All measurements should be made at the same locations and with equipment of the same characteristics. This paper presents the results of the analysis and vibration measurements performed on Mi-8 helicopters. A few examples of observed irregularities of individual parts are given, together with clear guidelines for maintenance actions.
Keywords: helicopter vibrations, blade, fault, vibration measurement, maintenance.

1. INTRODUCTION
Rotation of rotor helicopter in a horizontal plane generates
vertical lift force, and depending on its size, helicopter is
stationary in the air or moving up or down [1].
During rotor rotation, on helicopter are generated vibrations
caused by effect of the aerodynamic and mechanical forces.
Asymmetry of vertical lift force on the rotor, during its
horizontal rotation, causes vertical vibration, and asymmetry of
aerodynamic drag force on blades causes horizontal vibration.
In addition, separation of airflow at the ends of blades causes
both vertical and horizontal vibrations, upon high induced
velocities. The vibrations of helicopter may occur as a
consequence of the lack of dynamic and static balancing
blades. Main sources of vibration on helicopter, such as rotors,
are driveline, transmission and connections [2].
During the life cycle of a helicopter, i.e. during correct exploitation, it is necessary to establish maintenance that ensures high reliability and integrity of the components and of the helicopter as a unique complex system. The current helicopter maintenance practice requires a large number of parts to be monitored and replaced at fixed intervals. This constitutes an expensive procedure which contributes significantly to maintenance costs [3]. Maintenance of helicopters in the Serbian Air Force meets those requirements. Unfortunately, it significantly lags behind the maintenance methods for helicopters in modern armies of the world. The US Army, for example, has developed a program that allows the safety of the system to be defined in real time for different types of helicopters. This program is called the Vibration Management Enhancement Program (VMEP) [4, 5]. Based on the identified vibration spectrum, VMEP determines the condition and

correctness of the helicopter powertrain and of the elements that transmit power and torque [5].

Serbian Air Force helicopters are not equipped with devices that can continuously monitor vibrations in real time during flight. When uncommon vibration phenomena appear on a helicopter, it is necessary to install measurement equipment on the helicopter, measure the vibrations and make a comparative analysis of the disturbed vibration spectrum against the normal (regular) vibration spectrum [6].

Vibrations on a helicopter can be divided into vertical (in the direction of the vertical Z axis) and lateral (in the direction of the longitudinal X axis and of the transverse Y axis).

In the normal (regular) frequency spectrum, the vibrations at the BPF of the main rotor and at its higher harmonics of oscillation are dominant (Picture 1). Vibrations also exist in the frequency spectrum at all working frequencies of the rotating elements, but these vibrations should have smaller amplitudes than the vibrations at the main rotor BPF.

Technical Test Center engineers performed an experiment with vibration measurements at frequencies from 0 Hz to 2 kHz on two Mi-8 helicopters, at selected positions. The procedures and methods of vibration measurement have been described in [7] and [8]. The aim of the experiment was to demonstrate a part of the extensive possibilities which vibration analysis provides for rapid assessment of the condition of helicopters. Vibrations were measured on helicopters attached to the ground during operation, for several main rotor blade angles (USV), without autopilot. The frequency spectrum was analyzed for the range from 0 Hz to 2 kHz, but this paper shows the spectrum from 0 Hz to 80 Hz, in which certain anomalies exist. One case of a high-frequency vibration spectrum with anomalies is also shown. The analysis of the vibrations measured on the two helicopters determined concrete faults and gave precise instructions for their elimination.

During regular vibration measurements, changes in the vibration spectrum of the helicopter should be monitored. In the case of a vibration spectrum disorder due to failure of a working element, the vibration amplitude will increase at the rotation frequency of that element. Therefore, it is necessary to know the rotation frequencies of all working elements of the helicopter.
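For illustration only, a minimal numpy sketch is given below of how a recorded acceleration signal can be turned into an RMS amplitude spectrum so that two measurements can be compared; the sampling rate and signal here are assumed placeholders and this is not the workflow of the acquisition software used in the experiment.

    # Minimal sketch (not the acquisition-software workflow): turn a recorded
    # acceleration signal into an RMS amplitude spectrum for comparison.
    import numpy as np

    def amplitude_spectrum(signal_g, sample_rate_hz):
        """Return (frequencies in Hz, RMS amplitudes in g) of a time signal."""
        n = len(signal_g)
        window = np.hanning(n)                       # reduce spectral leakage
        spectrum = np.fft.rfft(signal_g * window)
        freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate_hz)
        amp_peak = 2.0 * np.abs(spectrum) / np.sum(window)   # single-sided peak amplitude
        return freqs, amp_peak / np.sqrt(2.0)                # peak -> RMS

    # Hypothetical usage with a synthetic signal containing 3.2 Hz and 16 Hz lines.
    fs = 5000.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    acc = 0.05 * np.sin(2 * np.pi * 3.2 * t) + 0.2 * np.sin(2 * np.pi * 16.0 * t)
    f, a_rms = amplitude_spectrum(acc, fs)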

2. SOME ASPECTS OF VIBRATION ON THE HELICOPTER
During the operation of helicopters, vibrations of small amplitude are always present. Helicopter vibrations are unsteady accelerations at any given location inside the fuselage, e.g. at the pilot seat, co-pilot seat or at a given crew or passenger seat, measured along three mutually orthogonal axes (as a fraction of the acceleration due to gravity, g). The dominant source of helicopter vibration is the main rotor. The frequency of vibration caused by the main rotor is at integer multiples of the rotor RPM: 1 per revolution (1/rev) is the rotor RPM, then 2/rev, 3/rev and so on. In addition to the main rotor, other sources of vibration are the engine/fan system, the main rotor transmission/drive-shaft/gear system, the tail rotor and its transmission system, and loose components that are a regular or external part of the aircraft. Examples are out-of-balance rotor blades, loose tail fins, loose engine shaft mounts, an unsecured canopy, the landing gear system or external weapons or cargo systems [9].
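The bookkeeping of these characteristic frequencies is simple arithmetic; the short sketch below reproduces the Mi-8 values used later in this paper (main rotor 192 rpm with 5 blades, tail rotor 1124 rpm with 3 blades), all numbers being taken from the text.

    # Rotor-harmonic bookkeeping: 1/rev, 2/rev, ... and blade-pass frequencies
    # computed from RPM and blade count (values taken from this paper's Mi-8 data).
    def rev_frequency(rpm):
        return rpm / 60.0                        # 1/rev fundamental in Hz

    def harmonics(rpm, count=4):
        f1 = rev_frequency(rpm)
        return [k * f1 for k in range(1, count + 1)]    # 1/rev, 2/rev, ...

    def blade_pass_frequency(rpm, n_blades):
        return n_blades * rev_frequency(rpm)

    main_rotor_rpm, main_blades = 192.0, 5       # -> 3.2 Hz, BPF 16 Hz
    tail_rotor_rpm, tail_blades = 1124.0, 3      # -> 18.73 Hz, BPF 56.2 Hz

    print(harmonics(main_rotor_rpm))                           # [3.2, 6.4, 9.6, 12.8]
    print(blade_pass_frequency(main_rotor_rpm, main_blades))   # 16.0
    print(blade_pass_frequency(tail_rotor_rpm, tail_blades))   # 56.2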

Picture 1. Harmonic oscillation of main blades: a) 2nd harmonic, b) 3rd harmonic
To determine the reasons for an increased occurrence of vibrations on a helicopter, it is recommended to carry out a comparative analysis of the vibration spectrum with disorders against the normal (regular) vibration spectrum.
A quick comparative analysis of vibrations on the Mi-8 helicopter is done by comparing the amplitudes at two characteristic frequencies: the frequency of 3.2 Hz (frequency of the main rotor, RPM MR) and the frequency of 16 Hz (Blade Pass Frequency of the main rotor, BPF). The amplitude of vibration at the rotation frequency of the main rotor should be several times smaller than the amplitude at the BPF, i.e. ARMS(RPM MR)/ARMS(BPF) << 1.
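A minimal sketch of this quick check is given below: it reads the RMS amplitudes of the spectral lines nearest 3.2 Hz and 16 Hz from a measured spectrum and forms the ratio; the array names are illustrative assumptions, not part of the measurement procedure described in the paper.

    # Quick check of the ratio described above: A_RMS(RPM MR) / A_RMS(BPF).
    # 'freqs' and 'amps_rms' are assumed to hold a measured spectrum (Hz, g RMS).
    import numpy as np

    def amplitude_at(freqs, amps_rms, target_hz, tol_hz=0.5):
        """RMS amplitude of the spectral line closest to target_hz (within tol_hz)."""
        idx = np.argmin(np.abs(freqs - target_hz))
        if abs(freqs[idx] - target_hz) > tol_hz:
            raise ValueError("no spectral line near %.1f Hz" % target_hz)
        return amps_rms[idx]

    def rotor_balance_ratio(freqs, amps_rms, f_rpm_mr=3.2, f_bpf=16.0):
        a_rpm = amplitude_at(freqs, amps_rms, f_rpm_mr)
        a_bpf = amplitude_at(freqs, amps_rms, f_bpf)
        return a_rpm / a_bpf     # should be much smaller than 1 for a balanced rotor

    # ratio = rotor_balance_ratio(freqs, amps_rms)
    # a ratio approaching or exceeding 1 indicates a dominant 1/rev line,
    # i.e. a suspected main rotor imbalance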

Vibrations that are usually analyzed on the Mi-8 helicopter occur at:
- revolutions per minute of the main rotor (RPM MR) at 192-193 rpm (main rotor frequency f = 3.2 Hz);
- the Blade Pass Frequency (BPF), which corresponds to the product N * f (the number of blades N and the rotor frequency f). The main rotor of the Mi-8 helicopter has 5 blades, so its BPF is 16 Hz;
- RPM of the tail rotor at 1124 rpm (frequency f = 18.733 Hz). The tail rotor of the Mi-8 helicopter has 3 blades, so its BPF is 56.2 Hz;
- RPM of the tail rotor shaft at 2589 rpm (frequency f = 43 Hz);
- the characteristic frequencies of the gears in the main gearbox of the helicopter.

3. EXPERIMENTAL SETUP

The aim of the experiment, which was carried out by engineers of the Technical Test Center (TTC), was to measure vibrations on the Mi-8 helicopter, at selected positions, at frequencies from 0 Hz to 2 kHz, and to show part of the extensive capabilities of vibration analysis for rapid assessment of helicopter condition. Vibrations were measured on helicopters attached to the ground during operation, for several main rotor blade angles (USV), without autopilot.

Test-measuring equipment of type NetdB12 - 01 Metravib was used for measuring vibrations on the Mi-8 helicopter on the ground. Technical characteristics of the data acquisition system are given in Table 1, and Picture 2 shows the system installed in a helicopter.

Picture 2. System NetdB12 - 01 Metravib in a helicopter

Table 1. Characteristics of the NetdB12 - 01 Metravib system
  Input channels
    No. of channels:  12 BNC
    Resolution:       24 bits
    Voltage:          AC / DC / AC ICP
    Range:            -20 dB: 14.1 V (10 V RMS); 0 dB: 1.41 V (1 V RMS); +20 dB: 141 mV (100 mV RMS)
    CHP:              > 105 dB RMS full scale

dBFASuite software is installed on this system; it is used to record the signals from the installed piezoelectric accelerometers and to process the recorded data.

Accelerometers No. 1, 2 and 3 are mounted at position 1 (down below the outside of the fuselage), Picture 3. Accelerometer No. 4 is mounted on the floor next to the pilot seats. Accelerometer No. 5 is placed on the floor of the passenger compartment, within the zone of the center of gravity of the helicopter. Accelerometer No. 6 is positioned on the rear side of the fuselage, at the beginning of the tail cone.

Picture 3. Three piezo-accelerometers down below the outside of the fuselage - position 1

The characteristics of the piezo uniaxial accelerometers that were used are given in Table 2.

Table 2. Characteristics of accelerometers
  No.  Type        Axis  Dynamic range (g)  Sensitivity (mV/g)  Position (place)
  1    CTC 135-1A  X     10                 500                 1
  2    CTC 102-1A  Y     50                 100                 1
  3    CTC 135-1A  Z     10                 500                 1
  4    CTC 102-1A  Z     50                 100                 2
  5    CTC 102-1A  Z     50                 100                 3
  6    CTC 102-1A  Z     50                 100                 4

The vibration spectrum was analyzed from 0 Hz to 2 kHz, but this paper shows only the region in which certain anomalies exist in the frequency spectrum.

4. RESULTS

The vibration spectrum of all helicopters was recorded in the range from 0 Hz to 2 kHz, but this paper shows the spectrum from 0 Hz to 80 Hz, where some anomalies exist. A higher-frequency part of the spectrum is shown for one concrete occurrence with anomalies in the vibration spectrum. Vibrations on helicopter No. 1 were measured twice. The frequency spectra from the first measurement on this helicopter, position 1, for several main rotor blade angles (USV) of 5°, 7°, 9° and 11°, are shown in Picture 4.

Picture 4. Frequency spectrums for main rotor blade angles (USV): a) 5° (1st measurement)

During the measurement of vibrations on helicopter No. 1, the helicopter vibrations were, according to the subjective feeling of the crew, large, so the measurement was interrupted.


Picture 4. Frequency spectrums for main rotor blade angles (USV): b) 7° c) 9° d) 11° (1st measurement)

The crew's impression is confirmed by the diagram in Picture 4 d), where increased levels of vibration are perceived in the spectrum up to 7 Hz.
Checking of the main rotor blades was carried out after the vibration measurement, and damage was found in the sealant between segments on one blade. The blade was repaired and the main rotor blades were adjusted to track condition. Engineers from TTC then repeated the vibration measurement on this helicopter.
Picture 5 shows the frequency spectrum of vibrations measured on the same helicopter, after the blade repair, in position 1, for USV: 5°, 7°, 9° and 11°.

Picture 5. Frequency spectrum for USV: a) 5° b) 7° c) 9° d) 11° (2nd measurement)

Picture 6 shows the frequency spectrum on helicopter No. 2, position 1, for USV 5°, 7°, 9° and 11°.


5. DISCUSSION

Vibration measurement on helicopter No. 1 was performed twice. After the first measurement the following was found:
- Vibrations at the frequency of 3.2 Hz, RPM MR, were dominant for all main rotor blade angles (USV). Both vertical and lateral vibrations are present;
- Amplitudes at the frequency of 16 Hz, BPF, have usual values. Vibrations at 16 Hz in the vertical direction are dominant in relation to the lateral vibrations;
- Vibrations at the frequencies of 56.2 Hz and 65 Hz change with the variation of USV (Picture 4);
- The ratio of amplitudes ARMS(RPM MR)/ARMS(BPF) at the frequencies of 3.2 Hz and 16 Hz is unfavorable at all USV, which indicates a pronounced imbalance of the main rotor.
The stated relationships of the observed amplitudes indicate an imbalance of the main rotor and that one blade generates more lift force than the other blades.
After the corrective actions, vibration measurements on the helicopter were performed once again, and the following was found:
- Vibrations at the frequency of 3.2 Hz, RPM MR, are significantly reduced;
- Vibrations at the frequency of 16 Hz, BPF, have become dominant, especially in the direction of the Y axis, for all USV, followed by vibrations along the Z axis;
- Vibrations at the frequencies of 56.2 Hz and 65 Hz are reduced, which shows that the unbalance of the main rotor has an impact on the elements of the tail rotor transmission.
The ratio of amplitudes ARMS(RPM MR)/ARMS(BPF) at the frequencies of 3.2 Hz and 16 Hz is favorable for all USV, i.e. the vibration level at 3.2 Hz is several times lower than the vibration level at 16 Hz, in the direction of all axes.
Based on the presented analysis of the vibration spectra of the helicopter, measured before and after the corrective actions, it can be concluded that the main rotor is balanced and that the level of vibrations was reduced to a minimum at all frequencies, except at the BPF, which remained the same.
On helicopter No. 2, in the frequency range from 0 Hz to 80 Hz, the following was found:
- at the frequency of 3.2 Hz, RPM MR, vertical vibrations (Z axis) are dominant for USV up to 9°; lateral vibrations (both X and Y axes) also exist and are dominant for USV 9°;
- at the frequency of 16 Hz, BPF, the dominant vibrations are along the X axis. This level of vibrations occurs from USV 5° and remains constant up to the maximum USV angle;
- at the frequency of 56.2 Hz, the tail rotor blade pass frequency, the level is constant for all USV.
The ratio of amplitudes ARMS(RPM MR)/ARMS(BPF) at the frequencies of 3.2 Hz and 16 Hz is favorable for USV 5° and is approximately 0.2 to 0.4 on all axes of the helicopter. A further increase of USV leads to a disorder of the regular vibration spectrum and increases the ratio of amplitudes.

Picture 6. Frequency spectrum on helicopter No. 2, for USV: a) 5° b) 7° c) 9° d) 11°


In the direction of the Y axis, the ratio ARMS(RPM MR)/ARMS(BPF) is already disrupted at a USV angle of 7°, when its value becomes greater than 1. The ratio ARMS(RPM MR)/ARMS(BPF) in the direction of the Z axis begins to be distorted at a USV angle of 9°, and at a USV angle of 11° its value is 1.

The most favorable ratio ARMS(RPM MR)/ARMS(BPF) is maintained along the X axis, and its value is less than 0.5 for all USV.

The stated ratio of amplitudes ARMS(RPM MR)/ARMS(BPF) indicates that there is an imbalance of the main rotor and that one blade generates a greater lift force than the other ones.

On this helicopter, an occurrence of high levels of vibration at the oscillation frequency of 337.75 Hz has been spotted. This frequency corresponds to the operation (rotation) of gear No. 10 in the gear unit, i.e. the bell of the main rotor shaft. Picture 7 gives the frequency spectrum from 0 Hz to 800 Hz measured by accelerometer No. 5, placed at position No. 3 (the floor of the passenger compartment).

Picture 7. The vibrations at the rotation frequency of gear No. 10 of the main rotor shaft, helicopter No. 2

In order to reduce the vibration of the main rotor in the direction of the Y axis, it is necessary to perform balancing and adjustment of the main rotor blades to track condition.

The increase of vertical vibrations of the main rotor requires validation of the blade geometry and adjustment of the main rotor blades to track condition.

Because of the increased amplitude of vibration at the frequency of 337.75 Hz, it is necessary to inspect gear No. 10 in the gear unit, i.e. the bell of the main rotor shaft. Removal of the identified failures was not carried out during the experiment.

On helicopters in the Serbian Air Force, vibrations are measured during operation only in exceptional cases, for example to identify the cause of a failure that has already occurred and could not be resolved in a classical way. Modern armies have developed programs, for different types of helicopters, for defining the safety of systems in real time, based on the detection of changes in vibration spectra. Maintenance of helicopters in the Serbian Air Force is done in a traditional manner, without continuous monitoring of vibration during operation.

6. CONCLUSION

Based on the analysis of vibrations, it is possible to determine various types of failures and improper function of component parts or aggregates. In the case of a frequency spectrum disorder due to reduced function of certain working elements, the amplitudes of vibration will be increased at the rotation speeds of those elements. Maintenance of helicopters in the Serbian Air Force is done in a traditional manner, without continuous monitoring of vibration during operation. In these circumstances, without Health and Usage Monitoring Systems (HUMS) on the helicopters, the only way to assess the condition of a helicopter is a comparative analysis of previously measured vibrations and present measurements.

By the analysis of vibrations measured on two helicopters, in the frequency range from 0 Hz to 80 Hz, it was found that an imbalance exists on the main rotor of helicopter No. 1 and that one of the blades generates more lift than the other blades. Corrective activities were therefore carried out on the main rotor. Vibrations measured after the corrective actions confirm that the deficiencies were eliminated and that the level of vibration of the helicopter was reduced to a minimum at all frequencies, except at the frequency of 16 Hz, BPF, which remained the same. The analysis was carried out by comparing the vibration amplitudes ARMS(RPM MR) and ARMS(BPF) at the oscillation frequencies of 3.2 Hz and 16 Hz.

A similar analysis showed that on helicopter No. 2 it is necessary to perform validation of the blade geometry, balancing and adjustment of the main rotor blades to track condition, in order to reduce the vibration of the main rotor. Because of the increased amplitude of vibration at the oscillation frequency of 337.75 Hz, it is necessary to inspect gear No. 10 in the gear unit, i.e. the bell of the main rotor shaft. Corrective actions on the second helicopter were not carried out during the experiment.

Changes in the vibration spectra of helicopters in operational use cannot be fully defined through periodic vibration measurements, so continuous monitoring of vibrations on helicopters is needed during flights and throughout the whole service life.

This paper analyzes the specified vibrations measured on two helicopters during the experiment. The possibilities which vibration monitoring provides for defining the current status and predicting the future condition of the helicopter for maintenance purposes are highlighted.

7. References
[1] Nenadović,M.: Osnovi projektovanja i konstruisanja helikoptera, Beograd, 1982. (in Serbian)
[2] Padfield,G.D.: Helicopter Flight Dynamics, Blackwell Science Ltd., Cambridge, 1996.
[3] Stupar,S., Simonović,A., Jovanović,M.: Measurement and Analysis of Vibrations on the Helicopter Structure in Order to Detect Defects of Operating Elements, Scientific Technical Review, Vol.62, No.1, 2012, pp.58-63.


[4] Giurgiutiu,V., Grant,L., Grabill,P., Wroblewski,D.: Helicopter health monitoring and failure prevention through Vibration Management Enhancement Program, presented at the 54th Meeting of the Society for Machinery Failure Prevention Technology, Virginia Beach, VA, May 1-4, 2000.
[5] Grabill,P., Brotherton,T., Berry,J., Grant,L.: The US Army and National Guard Vibration Management Enhancement Program (VMEP): Data Analysis and Statistical, presented at the American Helicopter Society 58th Annual Forum, Montreal, Canada, June 11-13, 2002.
[6] Ilić,Z., Jovanović,M., Jovičić,S., Janjić,R., Tomić,Lj.: New opportunities for maintenance of helicopters in the Serbian Air Force, 19th International Conference Dependability and Quality Management ICDQM-2016, Prijevor, Serbia, 29-30 June 2016, pp.216-223. (in Serbian)
[7] Rakonjac,V., Filipović,Z.: Merenje vibracija i relevantnih parametara leta transportnog helikoptera Mi-8 sa revitalizovanim lopaticama nosećeg rotora, Vojnotehnički glasnik 6, 2004, pp.611-621. (in Serbian)
[8] Ilić,Z., Rašuo,B., Jovanović,M.: Istraživanje uticaja nivoa vibracija na pilotskom sedištu u funkciji ispravnosti motora ispitivanjem u letu aviona Lasta, Tehnika 67 (6), 2012, pp.951-963. (in Serbian)
[9] Datta,A.: Fundamental Understanding, Prediction and Validation of Rotor Vibratory Loads in Steady-Level Flight, doctoral dissertation, Faculty of the Graduate School of the University of Maryland, 2004.


ON THE EFFECTIVE SHEAR MODULUS OF COMPOSITE HONEYCOMB SANDWICH PANELS

LAMINE REBHI
University of Defence in Belgrade, Military Academy, Belgrade, Serbia, rebhi.lamine@gmail.com
MIRKO DINULOVIĆ
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, Serbia, mdinulovic@mas.bg.ac.rs
PREDRAG ANDRIĆ
École Polytechnique Fédérale de Lausanne, Institute of Mechanical Engineering, Lausanne, Switzerland, predrag.andric@epfl.ch
MARJAN DODIĆ
University of Defence in Belgrade, Military Academy, Belgrade, Serbia, marjan.dodic@va.mod.gov.rs
BRANIMIR KRSTIĆ
University of Defence in Belgrade, Military Academy, Belgrade, Serbia, branimir.krstic@va.mod.gov.rs

Abstract: In the present paper the effective shear modulus of composite plates with a honeycomb core is determined. This elastic coefficient represents a very important property, especially in constructions subjected to torsion and combined bending-torsion. The structures investigated in this research consisted of top and bottom plates (of different types of carbon fibers, T300 and AS4, in an epoxy matrix) and a honeycomb core (HexWeb engineered core). Starting from classical lamination theory, the effective shear modulus of the top and bottom plates was determined for each ply in the stack-up sequence. These plies were lumped into a single composite layer for different fiber orientations and ply thicknesses. Elastic coefficients for the HexWeb engineered core were obtained using the Master and Evans relations for the equivalent properties of honeycomb cores.
To verify this approach, the finite element method was used to determine the displacement, stress and strain fields of composite plates with a honeycomb core. Two types of models were compared: the initial model, where all the material components, plates and core, were modeled with their intrinsic properties, and the lumped model with the calculated effective elastic coefficients.
It was found that the method of effective shear modulus calculation can be successfully used in situations where the top and bottom plates are symmetric or quasi-isotropic in general.

Keywords: Shear modulus, Equivalent material properties, Composite material, Honeycomb sandwich panel, Finite element analysis.

1. INTRODUCTION
Sandwich composites are widely used in aerospace structural design, mainly for their ability to substantially decrease weight while maintaining mechanical performance. It has long been known that separating two stiff materials (facings or stress skins) with a lightweight material (core) increases the structure's stiffness and strength.
The faces carry the tensile and compressive stresses in the sandwich. Faces can be manufactured from metallic materials (such as Al 3003, Al 5052, Al 5056); however, polymer matrix composites are being used more and more as face materials. Composites can be tailored to fulfill a range of demands like anisotropic mechanical properties, freedom of design, excellent surface finish, etc. Faces also carry local pressure. When the local pressure is high, the faces should be dimensioned for the shear forces connected to it.


The core's function is to support the thin skins so that they do not buckle (deform) inwardly or outwardly, to keep them in relative position to each other and to carry shear stresses. To accomplish this, the core must have several important characteristics. It has to be stiff enough to keep the distance between the faces constant. It must also be so rigid in shear that the faces do not slide over each other. The shear rigidity forces the faces to cooperate with each other. If the core is weak in shear, the faces do not cooperate and the sandwich loses its stiffness. It is the sandwich structure as a whole that gives the positive effects. However, the core has to fulfill the most complex demands. Strength in different directions and low density are not the only properties the core must have. Often there are special demands for buckling, insulation, absorption of moisture, aging resistance, etc. The core can be made of a variety of materials, such as wood, aluminum (Al 3003, Al 5052, Al 5056), a variety of foams (Corecell M Foam) and polymer matrix composite materials (Figure 1).


To keep the faces and the core cooperating with each other, the adhesive between the faces and the core must be able to transfer the shear forces between them. The adhesive must be able to carry shear and tensile stresses. It is hard to specify the demands on the joints.

Figure 1. Composite panel with honeycomb core construction

It is of great importance to predict, during the design phase, the properties of the above mentioned construction. This usually means accurately estimating all elastic coefficients (Eij, Gij, νij, i,j = x,y; Young's moduli, shear moduli and Poisson's ratios) based on the sandwich panel geometry (core cell geometry, cell wall thickness, facing thicknesses, etc.) and the material properties of the panel constituents.
In the present work the shear moduli (in-plane Gxy and out-of-plane Gxz and Gzy) of the panel with a composite hexagonal core and composite faces are investigated, and a model for the calculation of these properties is proposed.

2. NOMEX PRODUCTION

First, the manufacturing process of the core will be analyzed. The emphasis will be given to composite material cores. Nomex is a trademarked non-metallic paper (the basic building block of the honeycomb core) which is well known for its excellent mechanical and other properties relevant for aerospace applications.

Figure 2. Honeycomb (hexagonal) core production process scheme

In general, the paper is a composite that consists of two forms of polymers, the fibrids (small fibrous binder particles) and the floc (short fibres). These two components are mixed in a water-based slurry and machined to a continuous sheet. Subsequent high-temperature calendering leads to a dense and mechanically strong paper material (Figure 2).
During this manufacturing process, the longer floc fibres may align themselves in the direction of the paper coming off the machine, which leads to an orthotropic mechanical structure, or the fibres may distribute themselves randomly in the paper, resulting in a 2D random composite structure (it is assumed that the paper is manufactured to be relatively thin, hence a 2D structure results). The latter case will be considered in this paper.
Further processing of this paper material to form hexagonal cells is commonly carried out using the adhesive bonding method, meaning that the bonded portion of two adjacent paper sheets is held together by adhesive. This method inevitably leads to double the wall thickness of the bonded cell walls compared to the unbonded free cell walls (Figure 3).

Figure 3. Double wall thickness

After the paper sheets are bonded and shaped to hexagonal cells, the resulting honeycomb block is dipped in liquid resin (usually phenolic based) and subsequently oven cured. This dipping-curing process is repeated until the desired density of the core is achieved. The resulting resin coating of the paper leads to a layered material with an orthotropic or 2D anisotropic ductile center layer (core paper) and two isotropic, very brittle outer layers (resin layers).

3. ANALYTICAL MODEL OF SHEAR MODULUS FOR THE HONEYCOMB CORE

In order to relate stress and strain in the structure, the constituent material properties have to be known and the elastic coefficients in the stiffness matrix have to be calculated. For the structures investigated in this paper the determination of the necessary elastic coefficients is very complex, due to the complex geometry of the panel and, furthermore, the complex structure of the constituent materials, i.e. the core and the facings. The analyst relies on the manufacturer data or has to perform tests if all the required values of the elastic


coefficients are not supplied. Secondly, numerical studies can be performed, requiring large and cumbersome finite element models if the equivalent model approach is not taken. This method consists of lumping several composite layers into a single (lumped) layer with the same characteristics, rendering the same stress-strain (although averaged) and displacement fields for the same boundary conditions. It is true that this approach still requires numerical modeling of the structure; however, the models based on equivalent material properties are much smaller in size (fewer DOFs), hence requiring less computational resources and computing time and yielding the stress-strain and displacement fields much faster compared to detailed (micro-mechanics) models. This is the reason why the validation and determination of the equivalent material properties is important (especially when composite panels with a honeycomb core are in question).
The literature survey on the subject matter has revealed that the development of equivalent composite models is in the focus of many researchers. For example, the Master and Evans model for the equivalent Young's moduli in the fiber and cross-fiber directions and Ashby's model for the equivalent in-plane shear modulus are only a few of several existing models. However, all these models assume that the starting material is isotropic. For example, in the Master and Evans model one of the required input variables is Ef, which represents the Young's modulus of the paper. This is directly applicable to honeycomb cores where the basic building material is isotropic (for example, hexagonal aluminum cores); in cases where the cores are manufactured from composite materials (Section 2), the equivalent properties of the core building material have to be determined first, before using one of the already established and proven equivalent material property models.

3.1 Honeycomb core with 2D random fibers

If the paper material is manufactured in such a way that the resulting material is anisotropic and consists of a polymer matrix and randomly distributed fibers (in a 2D plane), the effective Young's modulus of such a structure can be determined based on the Christensen and Waals model [1].
Christensen and Waals examined the behavior of a composite system with a three-dimensional random fiber orientation. Both fiber orientation and fiber-matrix interaction effects were considered. For low fiber volume fractions, the modulus of the composite was estimated to be:

    E_2D = (V_f/6) E_f + [1 + (1 + ν_m) V_f] E_m                                   (1)

where V_f is the fiber volume fraction in the composite, ν_m is the matrix Poisson's ratio, and E_f and E_m are the Young's moduli of the fiber and matrix phase, respectively. Since it can be considered that the core composite (paper) is isotropic (the fibers are evenly distributed), the basic relation between the Young's modulus (E), the shear modulus (G) and the Poisson's ratio holds. Manera [2] proposed approximate equations to predict the elastic properties of randomly oriented short fiber composites. Using this approach, it can be shown that for this type of composites (thin plates with randomly distributed short fibers) the Poisson's ratio is ν = 1/3. Using the relation

    G_2D = E_2D / (2 (1 + ν_2D))                                                   (2)

the in-plane shear modulus for the paper of the composite honeycomb structure can be calculated.
As described in the previous section, the paper core is dipped in resin (several times, until the desired density of the paper used for the honeycomb structure is achieved).

Figure 4. Core paper with added isotropic resin layers

This process adds rigidity to the paper, and equations (1) and (2) have to be modified to account for this effect. The resin properties (modulus of elasticity, shear modulus) are known and classical lamination theory (CLT) can be applied in this case. The resin itself is considered to be isotropic. Therefore, using CLT, the stresses in each core paper layer can be expressed in the following form:

    {σ_x, σ_y, τ_xy}^T = [Q_1 Q_2 0; Q_2 Q_1 0; 0 0 Q_6] {ε_x, ε_y, γ_xy}^T         (3)

where

    Q_1 = E_i / (1 − ν_i^2),   Q_2 = E_i ν_i / (1 − ν_i^2),   Q_6 = G_i             (4)

In equations (4), the index i denotes the elastic coefficient of the corresponding layer; for example, if the central mid-layer is in question, the value of E_i is calculated according to equation (1), and all other layers (resin) assume their intrinsic values of E. The same applies to the shear modulus and the Poisson ratio in equations (4).
Expressing relation (3) in terms of in-plane resultant forces, one can obtain:

    {N_x, N_y, N_xy}^T = ∫_{−t/2}^{t/2} {σ_x, σ_y, τ_xy}^T dz                        (5)

where t denotes the thickness of the layer (Figure 4).
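For orientation, a minimal Python sketch of this lumping idea is given below: the paper modulus follows the form of equation (1), its shear modulus follows equation (2), the per-layer stiffness terms follow equation (4), and the thickness-weighted summation anticipates the A-matrix and the effective modulus G_xy^eff = A66/h developed in equations (6)-(10) below. All material numbers are illustrative placeholders, not data from the paper.

    # Sketch of the layer-lumping idea for the resin-coated paper (eqs (1)-(4) above,
    # A-matrix and G_eff = A66/h below). Material values are placeholders only.
    import numpy as np

    def paper_E2D(Ef, Em, Vf, nu_m):
        """Random-fibre paper modulus, following the form of equation (1)."""
        return (Vf / 6.0) * Ef + (1.0 + (1.0 + nu_m) * Vf) * Em

    def layer_Q(E, nu, G):
        """Isotropic-layer stiffness terms Q1, Q2, Q6 as in equation (4)."""
        Q1 = E / (1.0 - nu ** 2)
        Q2 = E * nu / (1.0 - nu ** 2)
        return np.array([[Q1, Q2, 0.0], [Q2, Q1, 0.0], [0.0, 0.0, G]])

    def effective_Gxy(layers):
        """layers: list of (E, nu, G, thickness). Returns A66 / h of the lump."""
        A, h = np.zeros((3, 3)), 0.0
        for E, nu, G, t in layers:
            A += layer_Q(E, nu, G) * t       # Qij^(k) * (z_k - z_{k-1})
            h += t
        return A[2, 2] / h

    # placeholder stack: resin / paper / resin (thicknesses in mm, moduli in MPa)
    nu = 1.0 / 3.0
    E_paper = paper_E2D(Ef=80000.0, Em=3500.0, Vf=0.4, nu_m=0.35)
    G_paper = E_paper / (2.0 * (1.0 + nu))             # equation (2)
    resin = (4000.0, 0.35, 4000.0 / (2.0 * 1.35), 0.02)
    paper = (E_paper, nu, G_paper, 0.06)
    G_eff = effective_Gxy([resin, paper, resin])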


(5)


For the whole lay-up it follows:

    {N_x, N_y, N_xy}^T = ∫_{−t/2}^{t/2} {σ_x, σ_y, τ_xy}^T dz = Σ_{i=1}^{n} ∫ {σ}_i dz          (6)

In the previous relation n denotes the total number of layers (core paper and resin layers); therefore it can be written:

    {N_x, N_y, N_xy}^T = Σ_{k=1}^{n} ∫_{z_{k−1}}^{z_k} [Q] {ε} dz                                 (7)

Or, rewriting the previous relation,

    {N} = [A] {ε}                                                                                 (8)

Matrix A is the stiffness matrix and all coefficients A_ij (i = 1,2,6; j = 1,2,6) can be easily calculated once all the properties of all the layers are known:

    A_ij = ∫_{−t/2}^{t/2} Q_ij^(k) dz = Σ_{k=1}^{n} Q_ij^(k) (z_k − z_{k−1})                      (9)

Since the lay-up in this case can be considered to be symmetric, it can be concluded that the effective shear modulus for this structure can be expressed as:

    G_xy^eff = A_66 / h                                                                           (10)

In equation (10) h denotes the total thickness of the laminate (core paper and resin layers).

3.2. Honeycomb core with orthotropic material

During a certain manufacturing process, the floc fibers can be aligned in the direction of the paper coming off the machine. This leads to orthotropic mechanical properties of the paper.

Figure 5. Core paper (orthotropic) with added isotropic resin layers

For this type of composite, equation (3) assumes the following form:

    {σ_x, σ_y, τ_xy}^T = [Q_11 Q_12 0; Q_21 Q_22 0; 0 0 Q_66] {ε_x, ε_y, γ_xy}^T                  (11)

where Q_ij are given in the following form:

    Q_11 = E_i1 / (1 − ν_i12 ν_i21),   Q_12 = Q_21 = E_i2 ν_i21 / (1 − ν_i12 ν_i21),
    Q_22 = E_i2 / (1 − ν_i12 ν_i21),   Q_66 = G_ixy                                               (12)

The coefficients Q_ij for the mid-layer (core paper) can be calculated using the standard rule of mixtures theory (ROM). Using the rule of mixtures, the shear modulus G_xy is obtained from the known relation:

    G_12 = G_f G_m / (V_f G_m + V_m G_f)                                                          (13)

where V_f and V_m are the fiber and matrix volume fractions, and G_f and G_m are the shear moduli of the fiber and matrix phases. This relation, as experiments have shown, tends to underestimate the value of G_12, and the semi-empirical Halpin-Tsai theory is used instead. According to this theory the G_12 modulus can be obtained using the following relation:

    G_12 = (V_f + ξ_12 V_m) G_f G_m / (G_m V_f + ξ_22 V_m G_f)                                    (14)

The correction coefficients ξ_12 and ξ_22 depend on the fiber type used in the core paper and are given below for typical fibers used in honeycomb structure production.

    Fiber type    ξ_22     ξ_12
    Carbon        0.500    0.400
    Glass         0.516    0.316
    Aramid        0.516    0.400

Once the elastic coefficients (equation (14)) are computed, the effective shear modulus for this type of composite (core paper with resin layers) is calculated using equation (10).

3.3. Effective in-plane and out-of-plane shear moduli of hexagonal honeycomb cores

Many authors have developed theoretical approaches for obtaining the equivalent material properties of honeycomb cores [3]. The nine core material properties are: two in-plane Young's moduli Ex, Ey, the out-of-plane Young's modulus Ez, the in-plane shear modulus Gxy, the out-of-plane shear moduli Gxz, Gyz, and three Poisson ratios νxy, νxz, νyz. Using a sensitivity analysis, Schwingshackl et al. [3] reported the major influence of the out-of-plane shear moduli Gxz, Gyz on the displacement, stress and strain fields of honeycomb-core thin plates. One of the analytical approaches mentioned in this work [3] was developed by Gibson and Ashby [4]. They described the honeycomb core material as a cellular solid made up of an interconnected network of solid structures which form the edges and faces of the cells. These formulae were later slightly modified by Zhang and Ashby [5] to include double-thickness walls for the out-of-plane values Ez, Gxz and Gyz.

The honeycomb cell geometry is presented in the following figure (Figure 6), where l and h indicate the length of the hexagon face; b indicates the height; t indicates the

thickness of the face; θ indicates the semi-angle between two faces.

Figure 6. Hexagonal honeycomb core and cell geometry

The equations (15) - (18) represent the equivalent material out-of-plane shear moduli according to [5], and are depicted in the following figure (Figure 7).

Figure 7. Hexagonal honeycomb core out-of-plane shear moduli (Gxz and Gyz)

    G_yz = (cos θ / (h/l + sin θ)) (t/l) G_xy^eff                                     (15)

    G_xz,lower = ((h/l + sin θ) / ((1 + 2 h/l) cos θ)) (t/l) G_xy^eff                 (16)

    G_xz,upper = ((h/l + 2 sin^2 θ) / (2 (h/l + sin θ) cos θ)) (t/l) G_xy^eff         (17)

    G_xz = G_xz,lower + (0.787 / (b/l)) (G_xz,upper − G_xz,lower)                     (18)

The in-plane shear modulus is presented in the following figure (Figure 8).

Figure 8. Hexagonal honeycomb core in-plane shear modulus (Gxy)

The equivalent in-plane shear modulus for the honeycomb composite core may be expressed by the following equation:

    G_xy = 0.2910 E_f t / a                                                           (19)

4. NUMERICAL MODEL

In order to verify the validity of the proposed model for the shear material properties of the composite honeycomb panel, a numerical approach using finite elements was used. Two FEA models were constructed. The first is a meso-scale model, where the complete structure (composite panel with honeycomb core) was modeled using plate finite elements (based on Kirchhoff's thin plate theory). In this model each honeycomb cell (Figure 9) was modeled using plate elements according to a geometry which corresponds to the geometry of the HexWeb engineered core, with HRH 10 Nomex, with cell size h = 13 mm. The dimensions of the analyzed model were 250 mm x 100 mm x 25 mm. The plate was clamped at one end, where all translations and rotations were prevented. Displacement was measured at the top plate, at the free-end semi-span. The software used was ANSYS.

Figure 9. Hexagonal honeycomb core finite element model

The facings (stress skins, Figure 10) were thin plates, 1 mm thick, made of carbon T-300. The composite lay-up of the facings (lower and upper) was [0°/45°/-45°/90°].

Figure 10. Hexagonal honeycomb core with facings and refined core mesh - finite element model

In the second verification finite element model (model 2), the core was modeled using solid elements, where the material properties (elastic coefficients in the stiffness matrix) were determined using the equivalent material approach, as described in Section 3.

The displacement field was calculated for both models and the results were compared. The boundary conditions under which the displacements of the plate tip were determined were the same in both cases. The plate was clamped at one end (all degrees of freedom of the end nodes, for both translation and rotation, are set to zero), whereas the loading moment was applied at the plate's free end. These boundary conditions,

when applied, enforce twisting of the honeycomb core and the facings, in which the shear moduli of the complete structure play an important role and have a great influence on the displacement of the complete structure.
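The lumped model (model 2) obtains its core properties from the relations in Section 3.3; a minimal sketch of evaluating those out-of-plane core moduli is given below, where the cell geometry and wall shear modulus are illustrative placeholders rather than values reported in the paper.

    # Sketch of the Section 3.3 out-of-plane core moduli (Gibson-Ashby / Zhang-Ashby
    # type relations, eqs (15)-(18)). Geometry and wall modulus are placeholders.
    import math

    def core_out_of_plane_moduli(l, h, b, t, theta_deg, G_wall):
        th = math.radians(theta_deg)
        s, c = math.sin(th), math.cos(th)
        G_yz = (c / (h / l + s)) * (t / l) * G_wall                                # eq (15)
        G_xz_low = ((h / l + s) / ((1.0 + 2.0 * h / l) * c)) * (t / l) * G_wall    # eq (16)
        G_xz_up = ((h / l + 2.0 * s ** 2) /
                   (2.0 * (h / l + s) * c)) * (t / l) * G_wall                     # eq (17)
        G_xz = G_xz_low + (0.787 / (b / l)) * (G_xz_up - G_xz_low)                 # eq (18)
        return G_yz, G_xz_low, G_xz_up, G_xz

    # regular hexagon (h = l, theta = 30 deg); all values below are placeholders
    G_yz, G_lo, G_up, G_xz = core_out_of_plane_moduli(
        l=7.5, h=7.5, b=25.0, t=0.06, theta_deg=30.0, G_wall=1200.0)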

The results are shown in the following figure (Figure 11).

Figure 11. Displacement of plate tip as a function of applied load

5. CONCLUSION

In the present paper the shear properties of the honeycomb plate were investigated. An approach based on the equivalent material model was proposed for honeycomb cores made of different types of composites. In general, due to the complex geometry of hexagonal composite cores, the prediction of the elastic coefficients for these types of structures is relatively complicated. One of the approaches is to use a numerical approach, starting from the constituent material properties and creating a true-scale honeycomb model in order to determine the stress-strain and displacement fields. As experiments have confirmed, this approach renders correct results; however, these (true-scale) models require quite a long development time (each honeycomb cell is modeled independently, with a large number of elements and hence a large number of degrees of freedom). Furthermore, in order to solve and obtain the desired fields, large computing power and time are necessary. Many researchers have therefore focused on finding a methodology for equivalent material models for different types of plates with honeycomb cores, with different cell geometries, sizes and constituent materials.

In this paper the shear moduli (in-plane and out-of-plane) for the hexagonal honeycomb core made of different types of composite materials (with aligned and with randomly distributed fibers) were investigated. The equivalent material model was presented. To verify the validity of the equivalent material model, a numerical approach was deployed by creating two different types of models: the true-scale model and the lumped model, where the core material of the honeycomb was defined using equivalent material properties. The displacement field was calculated for both models and the results were compared. It was found that in the linear region both models yield the same results. Since the computing time for the lumped model is significantly lower, its application is recommended. In the non-linear region the equivalent material model yields lower results compared to the true-scale model, which is probably due to the very complex core crushing mechanism; if results are required in this region, the true-scale model has to be developed.

References
[1] Christensen,R.M., Waals,F.M.: Effective Stiffness of Randomly Oriented Fiber Composites, Journal of Composite Materials, Vol.6, 1972, pp.518-532.
[2] Manera,M.: Elastic properties of randomly oriented short fiber-glass composites, Journal of Composite Materials, Vol.11, 1977, pp.235-247.
[3] Schwingshackl,C.W., Aglietti,G.S., Cunningham,P.R.: Determination of honeycomb material properties, Journal of Aerospace Engineering, 19 (2006), pp.177-183.
[4] Gibson,J., Ashby,F.: Cellular Solids: Structure and Properties, Cambridge University Press, 2001.
[5] Zhang,J., Ashby,M.: The out-of-plane properties of honeycombs, International Journal of Mechanical Sciences, 34 (1992), pp.475-489.


EFFICIENT COMPUTATION METHOD FOR FATIGUE LIFE ESTIMATION OF AIRCRAFT STRUCTURAL COMPONENTS

STEVAN MAKSIMOVIĆ
Military Technical Institute, Belgrade, Serbia, s.maksimovic@mts.rs
MIRJANA ĐURIĆ
Military Technical Institute, Belgrade, Serbia, mina.djuric.12@gmail.com
ZORAN VASIĆ
Military Technical Institute, Belgrade, aig135@mts.rs
OGNJEN OGNJANOVIĆ
Military Technical Institute, Belgrade, ognjenognjanovic@yahoo.com

Abstract: This work considers an efficient computation method for total fatigue life estimation of aircraft structural components under general load spectra. The total fatigue life of an aircraft structure or structural component can be divided into two parts. The first part represents the initial fatigue life up to crack initiation, in which low-cycle fatigue properties are used, and the second part represents the fatigue life during crack growth, in which dynamic material properties are used. To obtain an efficient computation method, the same low-cycle material properties are used in this work both for the determination of the initial fatigue life and for the residual life during crack growth. Based on the strain energy density (SED) theory, a fatigue crack growth model is developed to predict the lifetime of fatigue crack growth for single-mode or mixed-mode cracks. The presented computation results show that the crack growth method based on the strain energy density approach is in good agreement with the conventional Forman approach. This computation method for total fatigue life prediction can be used in the practical design of metal structural parts of helicopter tail rotor blades, as well as in the service life extension of the G-4 Super Galeb aircraft structures.

Keywords: aircraft structure, total fatigue life estimation, SED, low cyclic fatigue properties.

1. INTRODUCTION
Many failures of structural components occur due to cracks initiating from local stress concentrations. Attachment lugs are commonly used in aircraft structural applications as a connection between components of the structure. In a lug-type joint the lug is connected to a fork by a single bolt or pin. Generally, structures for which the fail-safe design is difficult to apply need the damage tolerance design [1,2]. Methods for design against fatigue failure are under constant improvement. In order to optimize constructions, the designer is often forced to use the properties of the materials as efficiently as possible. One way to improve the fatigue life predictions may be to use relations between the crack growth rate and the stress intensity factor range. To determine the residual life of damaged structural components, two crack growth methods are used here: (1) the conventional Forman crack growth method and (2) a crack growth model based on the strain energy density method. The latter method uses the low-cycle fatigue properties in the crack growth model [1,2]. Figures 1 and 2 show representative helicopter and aircraft structures in which the presented computation procedures can be used. Figure 1 shows the composite helicopter tail rotor blades with metal fittings.

Figure 1. The helicopter tail rotor

Figure 2 shows part of the fuselage structure of the Super Galeb G-4 aircraft.


Figure 2. Structure of the Super Galeb

2. FATIGUE LIFE ESTIMATION

The total fatigue life of aircraft structural components can be divided into two phases:
- the crack initiation phase, and
- the crack growth phase.

2.1. Crack initiation phase

For the determination of the initial fatigue life of structural components various relations can be used. Two relations are used here: (i) the Morrow relation and (ii) the Smith-Watson-Topper relation. Both relations are based on low-cycle fatigue material properties.

(i) Morrow relation

The Morrow curve for low-cycle fatigue has the following form [4]:

    Δε/2 = ((σ'_f − σ_m)/E) (2N_f)^b + ε'_f (2N_f)^c                                       (1)

The difference between this curve and the basic low-cycle fatigue curve is that it takes into account the effect of the mean stress σ_m, modifying only the elastic component of the total strain amplitude.

(ii) Smith-Watson-Topper relation

The Smith-Watson-Topper (SWT) curve of low-cycle fatigue is [5]:

    σ_SWT = sqrt(σ_max E Δε/2) = sqrt((σ'_f)^2 (2N_f)^(2b) + σ'_f ε'_f E (2N_f)^(b+c))       (2)

The effect of the mean stress is included using the following relation:

    σ_max = σ_m + Δσ/2                                                                      (3)

The mark SWT in (2) refers to the Smith-Watson-Topper parameter. The SWT parameter is used to include the effect of the mean stress σ_m in the fatigue life estimation. The fatigue life is defined by the number of cycles, N_f, at which an initial crack starts in the critical zone of the structural element.
The complete results of the fatigue life estimation up to crack initiation of the structural element are given in Table 2.

2.2. CRACK GROWTH PHASE

2.2.1 Determination of Fracture Mechanics Parameters of Lug Structural Components

Cracked aircraft attachment lugs are considered here, Fig. 3. Once a finite element solution has been obtained, Fig. 4, the values of the stress intensity factor can be extracted from it. For the determination of the stress intensity factors (SIF) of cracked aircraft attachment lugs, two approaches are used here: (1) a method based on the J-integral and (2) a method based on the extrapolation of displacements around the crack tip.

Figure 3. Geometry of the cracked lug (dimensions in mm: 83.3, 44.4, 160, R20, R41.65, t = 15 mm; F = 6371.6 daN)

Figure 4. Finite element model of the cracked lug with the Von Mises stress distribution

The path-independent J-line integral proposed by Rice [11] is defined as

    J = ∫_Γ (W dx_2 − T_i (∂u_i/∂x_1) ds)                                                   (4)

where W is the elastic strain energy density, Γ is any contour around the crack tip, T_i and u_i are the traction and displacement components along the contour, s is the arc length along the contour, and x_1 and x_2 are the local coordinates such that x_1 is along the crack.

2.2.2. Crack growth models

The conventional Forman crack growth model is defined in the form [3]:

    da/dN = C (ΔK)^n / ((1 − R) K_C − ΔK)                                                   (5)

where a is the crack length, N is the number of cycles, K_C is the fracture toughness, and C, n are experimentally derived material parameters. The strain energy density (SED) method can be written as [6-10]:

    da/dN = ((1 − n') / (4 E I_n' σ'_f ε'_f)) (ΔK_I − ΔK_th)^2                              (6)

where σ'_f is the cyclic yield strength, ε'_f is the fatigue ductility coefficient, ΔK_I is the range of the stress intensity factor, ψ is a constant depending on the strain hardening exponent n', and I_n' is the non-dimensional parameter depending on n'. ΔK_th is the range of the threshold stress intensity factor and is a function of the stress ratio, i.e.

    ΔK_th = ΔK_th0 (1 − R)^γ                                                                (7)

where ΔK_th0 is the range of the threshold stress intensity factor for the stress ratio R = 0 and γ is a coefficient (usually γ = 0.71).

2.2.3. Numerical validations

Two crack growth models are considered here: (i) a cracked aircraft lug and (ii) a cracked plate with a circular hole and an initial crack.

2.2.3.1. Crack growth model of the aircraft lug

The subject of this analysis are cracked aircraft lugs under cyclic load of constant amplitude and under load spectra. For that purpose the conventional Forman crack growth model and the crack growth model based on the strain energy density method are used. The material of the lugs is the aluminum alloy 7075 T7351 with the following material properties: tensile strength σ_m = 432 N/mm², σ_02 = 334 N/mm², K_IC = 2225 N/mm^(3/2); dynamic material properties (Forman's constants): C = 3·10^-7, n = 2.39; low-cycle fatigue material properties: σ'_f = 613 MPa, ε'_f = 0.35, n' = 0.121.

2.2.3.2. Determination of stress intensity factors

The stress intensity factors of the cracked lug defined in Figures 3 and 4 are determined here using different computation methods. The stress intensity factors (SIFs) of the cracked lugs are determined for the nominal stress levels σ_g = σ_max = 98.1 N/mm² and σ_min = 9.81 N/mm². The corresponding forces loading the lugs are F_max = σ_g (w − 2R) t = 63716 N and F_min = 6371.16 N. For the stress analysis of the cracked lug, a contact pin/lug finite element model is used in this paper. The results for the SIFs are given in Table 1. Good agreement between the FE and analytic results is evident. Two crack growth models of the cracked lug using relations (5) and (6) are shown in Fig. 5.

Figure 5. Comparison of the crack growth behavior using the SED and Forman methods
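As a small illustration of how relation (5) is used, the sketch below integrates the Forman law cycle by cycle for constant-amplitude loading with the 7075 T7351 constants quoted above; the geometry correction factor and the crack lengths are placeholders and do not reproduce the lug solution shown in Fig. 5.

    # Minimal sketch: integrate the Forman relation (5) for constant-amplitude loading
    # with the constants quoted above. beta(a) and the crack lengths are placeholders.
    import math

    C, n = 3.0e-7, 2.39             # Forman constants (from the text)
    K_C = 2225.0                    # fracture toughness, N/mm^(3/2)
    s_max, s_min = 98.1, 9.81       # nominal stresses, N/mm^2
    R = s_min / s_max

    def beta(a):
        return 1.12                 # placeholder geometry correction factor

    def delta_K(a):
        return beta(a) * (s_max - s_min) * math.sqrt(math.pi * a)   # N/mm^(3/2)

    def forman_life(a0, af, da=0.05):
        """Cycles to grow the crack from a0 to af (lengths in mm)."""
        N, a = 0.0, a0
        while a < af:
            dK = delta_K(a)
            dadN = C * dK ** n / ((1.0 - R) * K_C - dK)   # equation (5)
            if dadN <= 0.0:
                break                                     # unstable growth reached
            N += da / dadN
            a += da
        return N

    cycles = forman_life(a0=5.0, af=20.0)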

Table 1. SIFs of cracked lugs using the J-integral approach and displacement extrapolation (crack length a = 5 mm)

  Path of J-integral      Model   J-integral approach        Method of displacement extrapolation
  at location w                   J-integral   KI            KI        KII       KIII
  -                       2D      0.581        65.582        69.592    5.7803    0.000
  Path 1, w = 0.0 mm      3D      0.609        67.136        60.439    10.003    5.967
  Path 2, w = 4.75 mm     3D      0.637        68.669        66.090    8.221     2.099
  Path 3, w = 7.50 mm     3D      0.640        68.809        66.180    8.234     0.000

  Analytic solution [6]: KI = 65.621. FEM result [6]: KI = 68.784.


Table 2. Fatigue life estimation of structural components using in-house software up to crack initiation (plate with
central hole under load spectrum)

2.2.3.3 Crack growth model of plate with hole and initial crack under load spectrum

The second example of a crack growth model is a plate with a circular hole and one crack, subjected to a load spectrum. The complete results of the residual life estimation of the cracked plate with a circular hole and one crack under the load spectrum are given in Table 3.

Table 3. Fatigue life estimation of structural components using in-house software during crack growth (plate with central hole and initial crack under load spectrum)


The complete computation results for the plate with a central hole made of dural 2024 T4 are given in Tables 2 and 3. The computation results are in good agreement with experiments. The total number of blocks up to crack initiation is 38.4 (Table 2), while the experimental value is 45. The total number of blocks during crack growth is 5 (Table 3), and the experimental value is 5. Good agreement of the computation with the experimental results is evident.

3. CONCLUSIONS

This investigation is focused on developing efficient and reliable computation methods for total fatigue life estimation. Two computation methods for fatigue life estimation are presented: (i) fatigue life estimation up to crack initiation using low-cycle material properties and (ii) fatigue life estimation of structural elements during crack growth, also using low-cycle material properties. Both computation methods are based on the low-cycle fatigue material properties. The computation results are compared with our own experimental results. Good agreement between computation and experiments is obtained in both the crack initiation and the crack growth domain.
Special attention has been focused on the determination of fracture mechanics parameters of structural components, such as stress intensity factors of cracked aircraft lugs. Computational predictions of the fatigue life of an attachment lug under a load spectrum were performed. In this investigation the SED crack growth model based on low-cycle fatigue material properties is included. The comparison of the predicted crack growth rate using the strain energy density (SED) method with the conventional Forman model points out the fact that this model can be effectively used for residual life estimation.

References
[1] Maksimović,S., Posavljak,S., Maksimović,K., Nikolić-Stanojević,V., Djurković,V.: Total Fatigue Life Estimation of Notched Structural Components Using Low-Cycle Fatigue Properties, J. Strain (2011), 47 (suppl.2), pp.341-349, DOI: 10.1111/j.1475-1305.2010.00775.x
[2] Sehitoglu,H., Gall,K., Garcia,A.M.: Recent Advances in Fatigue Crack Growth Modelling, Int. J. Fract., 80, 1996, pp.165-192.
[3] Forman,R.G., Kearney,V.E., Engle,R.M.: Numerical analysis of crack propagation in cyclic loaded structures, J. Bas. Engng., Trans. ASME 89, 459, 1967.
[4] Morrow,J.D.: Fatigue properties of metals, In: Fatigue design handbook, Sec. 3.2, SAE Advances in Engineering, Warrendale, PA, 1968, p.219.
[5] Smith,K.N., Watson,P., Topper,T.H.: A stress-strain function for the fatigue of metals, J. Mater., 1970, 5, pp.767-778.
[6] Maksimović,K.: Strength and Residual Life Estimation of Structural Components Under General Load Spectrum, Doctoral Thesis, Faculty of Mechanical Engineering, 2009. (in Serbian)
[7] Maksimović,S., Vasović,I., Maksimović,M., Đurić,M.: Residual life estimation of damaged structural components using low-cycle fatigue properties, Third Serbian Congress of Theoretical and Applied Mechanics, Vlasina Lake, 5-8 July 2011, pp.605-617, Serbian Society of Mechanics, ISBN 978-86-909973-3-6, COBISS.SR-ID 187662860, 531/534(082).
[8] Maksimović,S., Kozić,M., Stetić-Kozić,S., Maksimović,K., Vasović,I., Maksimović,M.: Determination of Load Distributions on Main Helicopter Rotor Blades and Strength Analysis of the Structural Components, Journal of Aerospace Engineering, Vol.27, No.6, November/December 2014.
[9] Boljanović,S., Maksimović,S.: Fatigue crack growth modeling of attachment lugs, International Journal of Fatigue, 58 (2014), pp.66-74.
[10] Maksimović,S., Doić,R., Vasić,Z.: Application of NDI Methods in the Process of Extension of Operational Life of Aircraft Structures, OTEH 2014.
[11] Rice,J.R.: Elastic fracture mechanics concepts for interfacial cracks, Journal of Applied Mechanics, 1988, 55, pp.98-103.


ANALYSIS OF AIRCRAFT STRUCTURES CROSS SECTION

BOGDAN S. BOGDANOVIĆ
UTVA AI, Pančevo, bogdanovic00@gmail.com
DARIO A. SINOBAD
UTVA AI, Pančevo, 318dsinobad@gmail.com
TONKO A. MIHOVILOVIĆ
UTVA AI, Pančevo, tonkojetonko@gmail.com

Abstract: The definition of the normal and shear stresses produced by bending and twist in closed semi-monocoque structures, as the structures mostly used in aircraft, is considered in this paper. The calculation is realized using a well-known theoretical background such as fundamental beam theory and the shear flow definition in a statically indeterminate structure (single redundancy). The buckling behavior of the thin sheet is also taken into account. As results, the effective cross section is defined, and the normal stresses in the longitudinal parts (stringers) and the shear stresses in the skin (thin sheets) between stringers are obtained. A complete procedure for the whole calculation is formed, and some results are shown.

Keywords: effective width, normal stresses, shear stresses, shear center.

1. INTRODUCTION

The general load acting on the aircraft as a whole comprises air loads, inertia loads, power plant loads, landing gear loads, etc. All kinds of load are distributed over the parts of the aircraft structure as:
- forces (axial, transverse), and
- moments (bending, torsion).

Every part of the structure is exposed to some component of the general loads acting on the aircraft, and all parts together should keep the whole aircraft in equilibrium. The most frequently used shape of parts in primary, and also in secondary, structures of modern aircraft is the closed semi-monocoque type, which can be described as a reinforced shell or tube. The basic elements of this type are:
- stringers or longerons, beams,
- frames or bulkheads, ribs, and
- skin (sheet).

The stringer (longeron) - skin construction makes the outer shape of the structural part, but it must be supported by frames (in the fuselage) or ribs (in the wing). All of these elements must be well designed to take and transfer the part of the load that acts on them; it means that the stresses in the elements, produced by the loads, should be below the allowables defined for the elements' shape and material. A stress definition procedure for such constructions and the obtained results will be shown in this paper. The procedure covers only the linear range, but it can conveniently be extended to the nonlinear range of material behavior.

2. STRESSES DEFINITION FOR STRINGER SKIN CONSTRUCTION

The cross section of a main part of the structure, such as the wing, fuselage or tail unit, will be considered in order to establish the whole procedure. These cross sections are composed of:
- stringers (longerons), and
- skin.

The procedure itself is separated into two parts; it is the definition of:
- normal stresses, and
- shear stresses.

2.1. Necessary data

Before the procedure starts, a set of necessary data must be known. This set comprises:
- defined components of the loads acting on the element cross section,
- all defined geometrical data.

The components of the loads acting on the structural element cross section are (see Picture 1):
- bending moments about any pair of mutually orthogonal axes of the airplane coordinate system lying in the cross section plane, My, Mz,
- twisting moment about the third axis of the airplane coordinate system, normal to the cross section, Mx, and

119

OTEH2016

ANALYSISOFAIRCRAFTSTRUCTURESCROSSSECTION

transverse forces along direction of the same mutually


orthogonal axes, lying in the cross section plane Fy, Fz

Picture 1. Cross section simplified loads and geometry

Geometrical data are:
- coordinates of the stringer positions, usually referring to the airplane coordinate system axes situated in the cross section plane (Y, Z),
- stringer cross section area,
- skin length between two stringers (b),
- skin (sheet) thickness (t), and
- stringer efficiency coefficient (k).

2.2. Normal stresses calculation procedure

The procedure for the normal stress definition is iterative, which means that there are several approximations of the same type. The minimum number of approximations is two. Each of them starts using the corresponding geometrical and load data, and as final results:
- the new C.G. position,
- the inertia values about mutually orthogonal axes,
- the skin (sheet) effective width, and
- the stringer normal stresses
are obtained. These results are then compared with the same kind of results from the previous approximation. If the difference is bigger than some given value, the next step is necessary; if not, the procedure is stopped. The results that are compared are:
- the C.G. position, and
- the maximum stringer normal stress value.

2.2.1. Approximation details

First of all it should be noted that the cross section structure made of stringers and continuous skin is, for calculation purposes, substituted by a discontinuous one. In this discontinuous structure all of the cross section area is concentrated at the stringer positions, without any material between them. Every approximation consists of several steps:
- effective skin (sheet) width and area definition,
- calculation of the total area at the stringer position,
- definition of the new C.G. position and the corresponding stringer coordinates referring to the new C.G. position,
- inertia calculation corresponding to the new C.G. position,
- normal stress calculation,
- final effective width calculation.

a. Effective width and area definition
The effective skin area is defined as the parts of skin situated fore and aft of every stringer, and it is obtained by multiplying the effective width with the corresponding skin (sheet) thickness. It is supposed that the normal stress value in the whole effective skin area and in the stringer is the same. First of all the effective width will be defined. There are two ways in which the effective width can be defined, depending on the stringer normal stress type:
- the normal stress is tension, and
- the normal stress is compression.

a1. Normal stress is tension
In this case the whole skin length between two successive stringers is effective. The effective width which belongs to either of them is one half of the total length; it means:

    weff,i = weff,i+1 = bi,i+1 / 2                                (1)

It should be noted that the same expression is used in defining the effective width at the beginning of the first approximation for every stringer.

a2. Normal stress is compression
In this case the effective width is calculated using the expression:

    weff,i = C · t · sqrt(E / σst,i)                              (2)

This is the effective width on the fore or aft side of the stringer. Here are:
C [-]         coefficient, usually its value is 0.85
t [cm]        skin thickness
E [bar]       Young's modulus, and
σst,i [bar]   normal stress value in the stringer.
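As a small illustration of steps a1 and a2 above, expressions (1) and (2) can be evaluated as in the following Python sketch; the function names and the numerical values are illustrative assumptions and are not part of the paper.

import math

def effective_width_tension(b_prev, b_next):
    # Expression (1): the whole panel is effective, one half assigned to this stringer on each side.
    return b_prev / 2.0, b_next / 2.0

def effective_width_compression(sigma_st, t, E, C=0.85):
    # Expression (2): buckling-limited effective width on the fore or aft side of the stringer.
    return C * t * math.sqrt(E / abs(sigma_st))

# Illustrative numbers only (not taken from the paper's numerical example):
print(effective_width_tension(10.0, 12.0))                       # [cm], tension case
print(effective_width_compression(sigma_st=-900.0, t=0.1, E=2.1e6))  # [cm], compression case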
b. Calculation of the total area at the stringer position
The total area at the stringer position is composed of two parts:
AS [cm2]   stringer area, and
AW [cm2]   effective width area.
It means:

    At = AS + AW                                                  (3.1)

In this calculation the stringer coefficient of efficiency (K) should be taken into account. This coefficient defines the part of the total stringer area which is included in transferring the bending moments. Near cutouts in the structure this coefficient value is zero; at a certain distance from the opening it is one. Finally, the expression for the calculation of the total area at the stringer position is:

    At = K · (AS + AW)                                            (3.2)

c. New C.G. position
c1. C.G. position
The coordinates of the cross section C.G. position, at the beginning referring to the airplane coordinate system, are defined using the well known expressions:

    YC.G. = Σ Ai·yi / Σ Ai ,   ZC.G. = Σ Ai·zi / Σ Ai             (4)

Here are:
yi [cm], zi [cm]   stringer coordinates along the airplane coordinate system axes.
Calculation of the new C.G. position using (4) in every iteration gives, as a result, the difference between the C.G. coordinates of two successive approximations.

c2. Corresponding stringer coordinates referring to the new C.G. position
Having defined the new C.G. position, the coordinate system is simply translated from the previous C.G. position to the new one. The stringer coordinates in the new coordinate system are obtained as:

    yi = yi-1 − YC.G. ,   zi = zi-1 − ZC.G.                       (5)

d. Inertia calculation
The inertia values are calculated for the pair of mutually orthogonal axes passing through the cross section centroid, using the well known set of expressions:

    Iy [cm4]  = Σ (zi − ZC.G.)^2 · Ai ,
    Iz [cm4]  = Σ (yi − YC.G.)^2 · Ai ,                           (6)
    Iyz [cm4] = Σ (yi − YC.G.)·(zi − ZC.G.) · Ai

e. Normal stress calculation
The fundamental beam theory is used as the basis for the normal stress calculation. In application this theory is slightly modified using the so called "K method", which is very convenient for practical use. This method enables normal stress calculation using data referred to any pair of mutually orthogonal axes, data which are already defined. The used expression is:

    σb = (K3·Mz − K1·My)·yi − (K2·My − K1·Mz)·zi                  (7)

The coefficient values K1, K2, K3 are obtained from the expressions:

    K1 = Iyz / (Iy·Iz − Iyz^2) ,
    K2 = Iz  / (Iy·Iz − Iyz^2) ,                                  (8)
    K3 = Iy  / (Iy·Iz − Iyz^2)

f. Final effective width calculation
At the end of each approximation the final effective width is calculated using expression (2). The obtained final effective width values are compared with the corresponding width values from the previous approximation, and the smaller one is used in the further calculation as input data for the next approximation; that is:

    weff,i+1 < weff,i  →  weff,i+1
    weff,i+1 > weff,i  →  weff,i

g. Normal stress final values
These values are obtained from the last approximation, when the calculation is stopped.
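The approximation pass of Section 2.2.1 (steps b to e, expressions (3.2) to (8)) can be summarized by the following Python sketch; the data structure, the function name and the default values are illustrative assumptions and not the authors' implementation (the paper's calculation was realized in EXCEL).

def approximation_pass(stringers, My, Mz, K_eff=None):
    """One pass of Section 2.2.1, steps b-e.

    stringers : list of dicts with keys 'y', 'z' [cm], 'As' [cm2] (stringer area)
                and 'Aw' [cm2] (current effective skin area).
    My, Mz    : bending moments about the reference axes.
    K_eff     : optional stringer efficiency coefficients K (default 1.0).
    """
    n = len(stringers)
    K_eff = K_eff or [1.0] * n

    # Step b, expression (3.2): total area at every stringer position.
    At = [K_eff[i] * (s['As'] + s['Aw']) for i, s in enumerate(stringers)]

    # Step c, expression (4): centroid of the discontinuous section.
    A_total = sum(At)
    Ycg = sum(a * s['y'] for a, s in zip(At, stringers)) / A_total
    Zcg = sum(a * s['z'] for a, s in zip(At, stringers)) / A_total

    # Step c2, expression (5): coordinates referred to the new C.G.
    y = [s['y'] - Ycg for s in stringers]
    z = [s['z'] - Zcg for s in stringers]

    # Step d, expression (6): centroidal inertias of the concentrated areas.
    Iy = sum(a * zi ** 2 for a, zi in zip(At, z))
    Iz = sum(a * yi ** 2 for a, yi in zip(At, y))
    Iyz = sum(a * yi * zi for a, yi, zi in zip(At, y, z))

    # Step e, expressions (7) and (8): "K method" normal stresses.
    d = Iy * Iz - Iyz ** 2
    K1, K2, K3 = Iyz / d, Iz / d, Iy / d
    sigma = [(K3 * Mz - K1 * My) * yi - (K2 * My - K1 * Mz) * zi
             for yi, zi in zip(y, z)]
    return (Ycg, Zcg), (Iy, Iz, Iyz), sigma

In the full procedure this pass is repeated, updating the effective skin areas Aw with expressions (1), (2) and step f, until the C.G. position and the maximum stringer stress stop changing between two successive approximations.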

2.3. Shear stresses calculation procedure

There are two steps in the shear stresses calculation procedure:
- shear flow values definition, and
- dividing the shear flow by the corresponding thickness and obtaining the shear stress.
The total shear flow acting on the cross section skin is generally produced by two load components:
- the part of the flow produced by the transverse forces acting along the axes of the airplane coordinate system situated in the cross section plane (Y, Z axes), and
- the part of the flow produced by the twisting moment (pure torsion) acting about some cross section point, usually the origin of the coordinate system for the first approximation.

2.3.1. Shear flow produced by cross section transverse forces

The considered cross section is a closed contour shape. It means that the shear flow calculation is a statically indeterminate problem to the first degree. The problem will be solved assuming that all transverse forces pass through the cross section shear center. It means that:
- there is no cross section angular deformation (twist), and
- the sum of moments produced by all cross section shear flows, about any cross section point, must be zero.
The expression which describes the first condition is:

    θ = 0                                                         (9)

Here is:
θ [rad/cm]   angular twist of the cross section.
The second condition is described by the expression:

    ΣM = 0                                                        (10.1)

or, in expanded form:

    Mqi + Mqd = 0                                                 (10.2)

Here are:
Mq [daN·cm]    total sum of moments produced by the cross section shear flows
Mqi [daN·cm]   sum of moments produced by the cross section indeterminate shear flows
Mqd [daN·cm]   sum of moments produced by the cross section determinate shear flows.
Usually, the force components don't pass through the cross section shear center, so in that case condition (10.1) will not be satisfied and some twisting moment could exist. If this happens, the shear flow produced by this moment should be added to the shear flows produced by the force components in order to obtain the final shear flow values. The problem solution consists of several parts (steps):
- static determinate shear flow definition,
- static indeterminate shear flow definition,
- total shear flow calculation,
- calculation of the moment produced by the total shear flow from the force components, and
- shear center position definition.

a. Static determinate shear flow definition
For calculation purposes the closed cross section contour is cut at a chosen place and transferred into a determinate structure. The shear flow in such a structure will be calculated using, again, the "K method". The following equation will be used:

    qd = (K3·Fy − K1·Fz)·Σ yi·ΔAi − (K2·Fz − K1·Fy)·Σ zi·ΔAi      (11)

Here are:
Fy [daN], Fz [daN]   force values along the airplane coordinate system axes.

b. Static indeterminate shear flow definition
The static indeterminate shear flow has a constant value around the whole cross section circumference. In this procedure, the calculation of the static indeterminate shear flow value is realized using condition (9). In expanded form the angular deformation (twist) can be written as:

    θ = 1/(2·A·G) · ∮ (q/t) ds                                    (12.1)

    θ = 1/(2·A·G) · Σ (q·b/t)                                     (12.2)

Here are:
q [daN/cm]   shear flow value
A [cm2]      cross section area
G [bar]      material elastic shear modulus
ds [cm]      infinitesimal part of the cross section circumference
b [cm]       part of the skin (sheet) width between two successive stringers
t [cm]       corresponding thickness of the skin part.
Using (9), expression (12.2) becomes:

    1/(2·A·G) · Σ (q·b/t) = 0

The sum on the left side can be split in two:

    1/(2·A·G) · Σ (qd·b/t) + 1/(2·A·G) · Σ (qi·b/t) = 0           (13)

Here are:
qd [daN/cm]   values of the static determinate shear flow
qi [daN/cm]   values of the static indeterminate shear flow.
By rearranging (13) it is obtained:

    Σ (qd·b/t) + qi · Σ (b/t) = 0

(note: qi is a constant value, so it can be taken out of the summation sign).
Finally, the expression for the calculation of the indeterminate shear flow value is:

    qi = − Σ (qd·b/t) / Σ (b/t)                                   (14)

c. Total shear flow calculation
The total shear flow value on any skin part is obtained simply by adding the corresponding values qi and qd:

    q = qd + qi                                                   (15)

d. Calculation of the moment produced by the total shear flow from the force components
The moment produced by the total shear flow from the force components, about the chosen point (O), is defined by the expression:

    MqO = Σ q · 2·ΔA                                              (16)

Here is:
ΔA [cm2]   part of the cross section area corresponding to the shear flow value between two successive stringers.
If:

    MqO ≠ 0                                                       (17)

the obtained value should be added to the already known torsion moment value (input data Mx).

e. Shear center position
The total shear flow values produced by the transverse force components are obtained under the assumption that these force components pass through the shear center. Usually this is not the case and the force components act at some other point (O); because of this, result (17) is obtained. Mostly, this point (O) is the origin of the airplane coordinate system axes in the cross section plane. The value MqO can be canceled only by transferring the force components from point (O) to the shear center. In this case it should be written:

    MqO = MqFyO + MqFzO                                           (18)

Here are:
MqFyO, MqFzO [daN·cm]   total shear flow moment values corresponding to the force components Fy, Fz.
Both sums on the right side of (18) must be canceled separately, and it gives:

    MqFyO + Fy·zsc = 0 ,   MqFzO + Fz·ysc = 0

The shear center coordinates are:

    zsc = − MqFyO / Fy ,   ysc = − MqFzO / Fz                     (19)

2.3.2. Shear flow produced by cross section torsion

The expression for the calculation of the shear flow value produced by cross section torsion is:

    qMt = Mt / (2·A)                                              (20)

Here are:
Mt [daN·cm] = Mx + MqO   total cross section twisting moment (torsion)
A [cm2]                  total cross section area.

3. NUMERICAL EXAMPLE

As an example, the normal and shear stresses in an airplane fuselage cross section are calculated. The obtained results are shown in Tables 1, 2 and 3. The procedure is realized using EXCEL software.

Table 1. Stringer normal stress values
σ [bar], for the stringers from CL around the section back to CL:
86.61, 76.50, 25.97, 4.09, -67.51, -87.73, -87.73, -67.51, 4.09, 25.97, 76.50, 86.61

Table 2. Shear flow sums and shear stress values in skin sections between stringers
Section   q [daN/cm]   τ [bar]
CL-0      -2.7E-15     -3.4E-14
0-1        8.912871    178.2574
1-2       16.0749      321.4979
2-3       18.02914     360.5828
3-4       18.66437     373.2874
4-5       11.94374     238.8749
5-6       -2.7E-15     -5.5E-14
6-7       -11.9437     -238.875
7-8       -18.6644     -373.287
8-9       -18.0291     -360.583
9-10      -16.0749     -321.498
10-0      -8.91287     -178.257
0-CL      -2.7E-15     -3.4E-14
Control   -1.3E-12

Table 3. Shear center position
Ye.c. [cm] = -1.15576E-15
Ze.c. [cm] = 0
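As an executable summary of the shear flow steps of Section 2.3 (expressions (14), (15), (19) and the stress definition), a minimal Python sketch is given below; the panel lists and all numerical values are illustrative assumptions and do not reproduce the numerical example above.

def indeterminate_flow(q_d, b, t):
    # Expression (14): constant correction flow that closes the cut section.
    return -sum(qd * bi / ti for qd, bi, ti in zip(q_d, b, t)) / sum(bi / ti for bi, ti in zip(b, t))

def total_flows(q_d, b, t):
    # Expression (15): determinate flow plus the constant indeterminate flow.
    q_i = indeterminate_flow(q_d, b, t)
    return [qd + q_i for qd in q_d]

def shear_stresses(q, t):
    # Second step of Section 2.3: shear stress = shear flow / skin thickness.
    return [qi / ti for qi, ti in zip(q, t)]

def shear_center_offsets(M_qFy, M_qFz, Fy, Fz):
    # Expression (19): offsets of the shear center that cancel the flow moments about point O.
    return -M_qFy / Fy, -M_qFz / Fz

# Illustrative three-panel example (values are not from the paper):
q_d = [10.0, -4.0, -6.0]      # determinate flows after cutting the contour [daN/cm]
b   = [12.0, 12.0, 12.0]      # panel widths between stringers [cm]
t   = [0.10, 0.08, 0.10]      # panel thicknesses [cm]
q = total_flows(q_d, b, t)
print(q, shear_stresses(q, t))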

4. CONCLUSION

The procedure presented in this paper is a very convenient, easy and quick way of obtaining a large number of reliable results. These results are very useful both in global and in detailed stress analysis of airplane structural parts.


STRESS CALCULATION OF NOSE GEAR SUPPORT WITH ASPECT OF WELDING OF AEROSPACE STEEL 15CrMoV6

ALEKSANDAR PETROVIĆ
Utva avio industrija d.o.o., Pančevo, pealeks@gmail.com
BOGDAN S. BOGDANOVIĆ
Utva avio industrija d.o.o., Pančevo, bogdanovic00@gmail.com
ALEKSANDAR STANAĆEV
Utva avio industrija d.o.o., Pančevo, aleksandarstanacev@outlook.com

Abstract: The stress calculation of the nose gear support of the airplane SOVA and the definition of the welding specification for its welded connections are considered in this paper. During the design of the new airplane SOVA the need arose to modify the Utva-75 nose gear support. The lack of information regarding the welding of 15CrMoV6 steel made it necessary to create a welding procedure specification and to verify it experimentally.
The stress calculation is realized with FEM analysis. The results are used for the calculation of the welded connections and for creating a proper welding procedure specification.
The welding analysis is done experimentally and numerically. The experiment is done on standard specimens made from the aerospace steel 15CrMoV6 (1.7734). The results are compared and shown in the paper.
Keywords: Nose gear support, Welding, TIG, 15CrMoV6, FEM.
1. INTRODUCTION

The design and certification of the aircraft SOVA, derived from the basic aircraft U-75, have made it necessary to define the stress state in the nose gear support (NGS). While doing so, we took into account the data related to the exploitation of the aircraft U-75, such as the occurrence of cracks on the NGS, and the implemented modification.
Production of this assembly is done by welding and, due to the specific production, it is necessary to detail the technological process and determine the appropriate welding parameters. For this purpose the following was carried out:
- stress analysis of the NGS;
- an experiment, identifying the best possible welding parameters and materials.
The stress analysis is done using finite element analysis (FEM).
The steel used for NGS production is the aviation steel 15CrMoV6 (1.7734). This is a low alloy, low carbon steel with excellent tensile strength and hardness and good weldability. The basic purpose of this steel is the manufacturing of pressure vessels, rocket motors and solid-fuel missiles, and it is also used in the automotive industry, the railway industry and aviation. The chemical composition of this steel is given in Table 1.

Table 1. Chemical components of 1.7734 steel
Element   Min %   Max %
C         0,12    0,18
Si                0,2
Mn        0,8     1,1
S                 0,015
P                 0,02
Cr        1,25    1,5
Mo        0,8     1
V         0,2     0,3

Filler materials for the 1.7734 steel are 8CrMo12 (8CD12) or the steel 15CDV6 (1.7734.2). The procedure in which the filler material has lower characteristics than the base material is called under-matching. Additional information on welding these types of steels can be found in [1].
In order to optimize production, testing of welded specimens of steel 1.7734 in conditions 1.7734.4 and 1.7734.5 was carried out. The mechanical characteristics of the steel in the specified conditions are given in Table 2; further information can be found in [4].
During the experiment the data that were varied are:
- state of the material,
- post weld heat treatment (PWHT) of the welded joints.

Table 2. Mechanical properties of steel 1.7734.4 and 1.7734.5
Properties                    1.7734.4   1.7734.5
0.2% Proof Stress [N/mm2]     550        790
Tensile Strength [N/mm2]      700        980-1180
Elongation [%]                12         10
Hardness                      205 HV     29 HRC

2. STRESS ANALYSIS

The calculation of the NGS is done in accordance with the requirements of [5], and the corresponding input values are shown in Table 3.

Table 3. The limit force components
Component   For aft loads   For forward loads   For side loads
Z+ comp.    5051 N          5051 N              5051 N
X+ comp.    4041 N
X- comp.                    2021 N
Y comp.                                         3536 N

The stress analysis was carried out by FEM. To eliminate the error induced by the density of the mesh, several mesh sizes were employed until convergence of the stresses and displacements at characteristic points was achieved [2].
In Pictures 1 to 3 the obtained results for the defined loads in the zone of previous crack occurrence on the UTVA-75 airplane, for several load conditions, are shown.

Picture 1. Stress state in the zone of previous crack occurrence, for aft loads
Picture 2. Stress state in the zone of previous crack occurrence, for forward loads
Picture 3. Stress state in the zone of previous crack occurrence, for side loads

It was observed that the maximum von Mises stress values, which generally do not exceed the ultimate values for the given material, occur in the zones where cracking occurred on the unmodified NGS. It can be concluded that the additional reinforcement should completely eliminate the occurrence of zones with high stresses.
In Pictures 4 to 6, the stress values at the welded joints between the planned reinforcement and the existing (basic) NGS structure are shown.

Picture 4. Stress state in the zone of welded joints, for aft loads
Picture 5. Stress state in the zone of welded joints, for forward loads

The results show that the maximum stress is induced at the welded joint for side loads. The normal stress values in that zone will be used when calculating the welded joints.

Picture 6. Stress state in the zone of welded joints, for side loads

3. EXPERIMENT DETAILS

The experiment takes into account the effect of PWHT on the state of the weld joints and the base material. The butt weld joint using 1.2 mm thick sheet, which is most often used in this assembly, is examined. Standard specimens were used, in accordance with [3]. The specimens are obtained from the welded plates. The groove weld is designed with a gap of 1 mm and the welding was done in one pass.
The following tests were carried out:
- tensile tests,
- hardness tests.
An overview of the specimens used for obtaining the mechanical properties of the welded joints is given in Table 4.

Table 4. Specimens and configuration for testing
MATERIAL    PWHT         No. of specimens     No. of specimens
                         for tensile tests    for hardness tests
1.7734.4    600°C/1,5h   3                    2
1.7734.4    580°C/4h     3                    2
1.7734.5    600°C/1,5h   5                    2
1.7734.5    580°C/4h     5                    2

The plates that are used for specimen production are shown in Picture 7.

Picture 7. Groove before welding

Welding process 141 (TIG) is a procedure of joining metals by melting the base and filler metal using heat. The heat is produced by an electric arc between a tungsten electrode and the base metal. The filler metal, in wire form, is immersed in the liquid bath and, due to the high heat, melts and mixes with the base material. The inert gas used for the welding procedure is argon. The welded plates after PWHT are shown in Picture 8. The prescribed parameters from the welding procedure specification are given in Table 5.

Table 5. Welding parameters
Filler material          1.7734.2
Filler diameter [mm]     1,6
Current [A]              50-60
Voltage [V]              12
AC/DC current            DC
Welding speed [cm/min]   3
Gas flow [l/min]         9,5

Picture 8. Welded plates

The specimens are made of the welded plates, cut and milled to dimensions in accordance with [3]. Picture 9 shows the form of the specimens made for the tensile test, and Picture 10 shows the specimen dimensions.

Picture 9. Specimens for tensile testing

Picture 10. Specimen dimensions

4. RESULTS

Hardness testing is done by the Rockwell method. The measurements are performed in the zones of the base metal, the weld seam and the heat affected zone (HAZ). The hardness testing data for the butt joint are given in Table 6.

Table 6. Values of hardness testing (HRC)
Specimen configuration   Specimen number   Base metal   Weld seam   HAZ
1.7734.4, 600°C/1,5h     1                 28           41          34
                         2                 28           41          37
                         3                 28           41          37
                         4                 28           40          35
                         5                 29           36          33
1.7734.4, 580°C/4h       1                 28           36          40
                         2                 28           44          34
                         3                 29           43          34
                         4                 30           41          36
                         5                 30           44          33
1.7734.5, 600°C/1,5h     1                 30           35          28
                         2                 29           35,5        28
                         3                 29           35,5        29
                         4                 28           37          28
                         5                 28           36          29
1.7734.5, 580°C/4h       1                 32           35          29
                         2                 33           38          32
                         3                 32           38,5        32
                         4                 32           35          31,5
                         5                 33           38          33

The data from Table 6 are presented in Picture 11 for steel 1.7734.4 and in Picture 12 for steel 1.7734.5. From Picture 11 it can be concluded that the hardness of the steel 1.7734.4 is highest in the zone of the weld seam, as expected.

Picture 11. Hardness for specimens of steel 1.7734.4

Changing the parameters of the PWHT from 580°C/4h to 600°C/1,5h results in lower measured hardness values:
- 4,3% for the weld metal,
- 0,6% for the HAZ.
The data measured for the steel 1.7734.5 specimens yield a 2.8% hardness drop in the weld seam and a 10% hardness drop in the HAZ for the same PWHT parameter change.

Picture 12. Hardness for specimens of steel 1.7734.5

Tensile testing of the 1.7734.4 steel specimens was done on the SHIMADZU machine at the Military Technical Institute, and of the 1.7734.5 steel specimens in Utva-AI. The test is shown in Picture 13, and the specimens after testing are shown in Pictures 14 and 15.

Picture 13. Tensile testing machine SHIMADZU
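The percentage changes quoted above are simple comparisons of the mean hardness between the two PWHT states; a minimal Python sketch of that comparison is given below, using the 1.7734.4 weld-seam readings from Table 6 (the helper names are ours, not the paper's).

def mean(values):
    return sum(values) / len(values)

def hardness_change(before, after):
    # Relative change of the mean hardness between two PWHT states, in percent.
    return 100.0 * (mean(after) - mean(before)) / mean(before)

# Weld-seam readings for steel 1.7734.4 from Table 6:
weld_580C_4h  = [36, 44, 43, 41, 44]   # PWHT 580 °C / 4 h
weld_600C_15h = [41, 41, 41, 40, 36]   # PWHT 600 °C / 1.5 h

print(round(hardness_change(weld_580C_4h, weld_600C_15h), 1))  # about -4.3 %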

Picture 14. Specimens of steel 1.7734.4
Picture 15. Specimens of steel 1.7734.5

The results of the tensile tests are given in Table 7 for steel 1.7734.4 and in Table 8 for steel 1.7734.5. The test was performed on three specimens for steel 1.7734.4 and on five specimens for steel 1.7734.5. The tensile strength is defined by the expression:

    RM = FM / (a · b)

where:
FM [N]   maximal tensile force,
a [mm]   specimen thickness,
b [mm]   specimen width.
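For illustration, the expression above can be evaluated as in the short Python sketch below. The specimen width is an assumed placeholder (only the 1.2 mm sheet thickness and the force value from Table 7 are taken from the paper), so the printed value is not expected to match Table 7.

def tensile_strength(F_max_kN, a_mm, b_mm):
    # R_M = F_M / (a * b); the force is converted from kN to N so the result is in MPa (N/mm^2).
    return F_max_kN * 1000.0 / (a_mm * b_mm)

# First 1.7734.4 specimen of Table 7 (13,2551 kN), 1.2 mm sheet, assumed 12.5 mm gauge width:
print(round(tensile_strength(13.2551, 1.2, 12.5), 1))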

Table 7. Values of tensile testing for steel 1.7734.4
                                             Place of break            Weld seam
Material /    Specimen   Maximum force    Tensile      Yield       Tensile      Yield
PWHT          number     [kN]             strength     strength    strength     strength
                                          [MPa]        [MPa]       [MPa]        [MPa]
1.7734.4      1          13,2551          804          864         515          479
600°C/1,5h    2          13,6046          826          890         644          598
              3          13,2761          823          888         641          595
1.7734.4      1          12,336           804          781         516          476
580°C/4h      2          12,6201          826          804         547          502
              3          12,3069          823          846         511          489

Table 8. Values of tensile testing for steel 1.7734.5
                                             Place of break            Weld seam
Material /    Specimen   Maximum force    Tensile      Yield       Tensile      Yield
PWHT          number     [kN]             strength     strength    strength     strength
                                          [MPa]        [MPa]       [MPa]        [MPa]
1.7734.5      1          15               979                      564
600°C/1,5h    2          14,6             972                      547
              3          15,3             997                      671
              4          14,8             978                      544
              5          15,2             992                      652
1.7734.5      1          15,6             984                      614
580°C/4h      2          15,3             974                      591
              3          15,4             971                      655
              4          14,9             964                      514
              5          14,9             942                      568

The data from Table 7 are presented in the form of graphs in Picture 16 and Picture 17, and the data from Table 8 in Picture 18. From the graphs a comparative overview of the observed values for the different PWHT can be seen. The change of PWHT from 580°C/4h to 600°C/1,5h for the specimens made of steel 1.7734.4 shows an increase of tensile strength by 1.25% at the place of break and an increase of tensile strength by 12.5% at the weld seam.

Picture 16. Tensile strength for specimens of steel 1.7734.4

The measured yield strength of the specimens made of steel 1.7734.4, presented in Picture 17, shows that the change of the PWHT from 580°C/4h to 600°C/1,5h increases the yield strength by 0.3% at the place of break and by 4.3% at the weld seam.

Picture 17. Yield strength for specimens of steel 1.7734.4

The PWHT parameter change for the specimens made of steel 1.7734.5 gives an increase of tensile strength by 1,7% at the place of break and an increase of tensile strength by 1.2% at the weld seam.

Picture 18. Tensile strength for specimens of steel 1.7734.5

5. CONCLUSION

The paper presents the calculation of the stress in the structure and the welded joints of the NGS, with the load conditions defined in [5]. The stress calculation was made by FEM. The results of the analysis of the NGS show the need for further modification in order to reduce the stress in the critical areas of the assembly. It was concluded that additional stiffening can prevent the occurrence of local high stress concentrations.
Another aspect of the paper relates to the testing of welded joints on standard specimens in order to determine the best welding procedure in terms of strength of materials and production optimization. Standard specimens of steel 1.7734 were tested. During the testing, the parameters of the PWHT were changed in order to verify the mechanical characteristics of the welded joints and to check the possibility of production optimization.
It is shown that changing the parameters of the PWHT from 580°C/4h to 600°C/1,5h should not weaken the welded joints on the standard specimens. From the point of production optimization it is shown that the NGS could be manufactured of steel 1.7734.5 with the PWHT parameters defined as 600°C/1,5h, which would cheapen the production and shorten the time of manufacture.
The presented results are consistent, but because of the small number of specimens they should be considered as preliminary. Further tests should be done using a sufficient number of specimens, with the possibility of varying several other parameters, in order to properly define the welding procedures of the welded joints necessary for the production of the NGS.

References
[1] GOŠA Institute: Literature for IWE course, Module 2, 2014.
[2] Rao, Singiresu S.: The Finite Element Method in Engineering, Fourth Edition, Elsevier Butterworth-Heinemann, ISBN 0-7506-7828-3.
[3] Standard ASTM E8.
[4] Werkstoff-Leistungsblatt: Schweißbarer Chrom-Molybdän-Vanadium-Vergütungsstahl, Bleche, Platten und Bänder - 1.7734, March 1982.
[5] European Aviation Safety Agency: Certification Specifications for Normal, Utility, Aerobatic, and Commuter Category Aeroplanes, CS-23.

INFLUENCE OF PILOTS' AVERAGE BODY MASS INCREASE ON THE BALANCE OF LIGHT PISTON TRAINING AIRCRAFT

ZORICA SARI
Military Technical Institute, Belgrade, vti@vti.vs.rs
ZORAN VASI
Military Technical Institute, Belgrade, vti@vti.vs.rs
VOJISLAV DEVI
Military Technical Institute, Belgrade, vti@vti.vs.rs
BORIS GLAVA
Military Academy, Belgrade, risbo@vektor.net

Abstract: The paper presents the procedure of calculation and experimental determination of aircraft mass and balance, the design limitations defined by international civil and military airworthiness regulations and requirements, a statistical analysis of the average pilot body weight (mass) gain during the last sixty years, and their influence on the balance of a light piston trainer aircraft. The practice used for aircraft mass and balance calculation during the conceptual design phase, and the experimental determination of aircraft mass and balance after production of the prototype and series aircraft, are explained in the first part of the paper, together with a short overview of the related definitions and design limitations valid in international civil and military requirements. The statistical analysis of pilots' body weight gain during the last several decades is given in the second part of the paper. The importance of recording and updating exact average body mass data of the pilots assigned to a specific aircraft, and of its conformance to the current airworthiness requirements, is explained. Real pilots' mass data, as well as mass and balance data of produced aircraft, were used.
Keywords: aircraft, mass, balance, design, pilot, body dimensions.
1. INTRODUCTION

There are many factors that lead to efficient and safe operation of an aircraft. Mass (hereinafter: weight) and balance is one of the most important factors affecting safety of flight. An overweight aircraft, or one whose center of gravity (CG) is outside the allowable limits, is inefficient and dangerous to fly. The responsibility for proper weight and balance control begins with the engineers and designers and extends to the pilot who operates and the technicians who maintain the aircraft [1]. There are several equally important elements related to weight and balance: the proper calculation during aircraft design, the weighing of the produced aircraft ready to fly, the maintaining of the weight and balance records, and the proper loading of the aircraft before and during the flight.
Design, production, maintenance and usage during the service of an aircraft are complex activities and require complete dedication of all involved in these activities. Any inaccuracy in any one of these activities nullifies the purpose of the whole effort.
This paper is a part of the research and development activities related to the design of light piston trainer aircraft for basic and primary training that have been carried out in the Military Technical Institute for the last sixty years. During all of these years the airworthiness requirements in the world have been changing, amending, and developing, all of this in order to improve flight safety.
There have not been significant and frequent changes in the weight and balance sections of the airworthiness requirements concerning the values for standard pilot and flight crew weight. In the meantime, the requirements related to the pilot set (helmet, parachute, suit, etc.) have been changing continuously, becoming overall heavier and more complete, with the aim of becoming safer. Moreover, historically the average (mean) body weight has become greater, so the human body weight increased by about 33% and the average height increased by 10-11 cm during the last fifty years [9]. During this period, the differences between the values of standard pilot weight in the airworthiness requirements and the actual pilots' weight have become significant. For that reason the designers have to take into account exact data on current pilots' weights, or use different design margins of safety, in order to provide safe and quality flight of the aircraft. Keeping in mind that the aircraft service life is always more than 25 years, the designer must take into consideration the future pilot generations that will fly the aircraft.
Recent reports from the USA Aviation Safety Reporting System (ASRS) provide some insight into the safety implications of continued reliance on outdated human strength/control force data [18]. Recent reports show that using obsolete data on pilots' anthropometric measures during the design of a new aircraft could provoke adverse consequences for pilot and aircraft safety. Hence, non-availability of anthropometric data that is representative of the target population should be considered a safety hazard [18].
The motivation of the authors of the paper has emerged from the identification of the mentioned phenomenon and the professional need to indicate it and to technically research the topic for future generations of designers. The aim of the paper is to emphasize the need for a careful approach to this design challenge and to establish the basis for a new design methodology that will enable the right way to design a safe aircraft with optimal weight and balance which will meet stability and controllability requirements, taking into account the body weight gain and stature height of current and future pilots, and the actual airworthiness requirements.
The overview of the design limitations defined by international civil and military airworthiness regulations and requirements is given in Section 2. The weight engineering and technical basis of weight and balance calculations and predictions during the design of a new aircraft are shown in Section 3. Section 4 describes the practical activities that are necessary during experimental measuring of the weight and balance of a fabricated aircraft. The statistical analysis of the average pilot body weight (mass) gain during the last fifty years is explained in Section 5. The discussion and conclusion of the topic are presented in Sections 6 and 7.

2. WEIGHT ENGINEERING

Weight prediction is the engineering task of accurately predicting the weight of an aircraft, well in advance of the time the actual weight can be determined by placing the real aircraft on scales. Aircraft weight prediction is always a mixture of rational analysis and statistical methods. Statistical weight equations for many components are usually written in exponential form, as will be seen in the weight prediction [3].
The importance of weight and balance in designing a safe aircraft is emphasized in [4], where it is implied that each stated requirement must be met at each appropriate combination of weight and center of gravity (CG) within the range of loading conditions. Compliance with all requirements must be met and proved by tests or by calculations, and by systematic investigation of each probable combination of weight and center of gravity. The load distribution on the aircraft during flight must not exceed the selected limits, the limits at which the structure is proven, and the limits defined by the adequate flight requirements.

2.1. Weight and balance terms

Concerning the weight limits, the maximum weight of the aircraft is defined as the highest authorized weight of the aircraft and all of its equipment, as specified for an aircraft, and must not be more/less than the specified values. These values are driven by different parameters, such as the lift that the wing can provide under operating conditions, the structural strength, and other flight requirements related to, for example, the minimum fuel capacity for continuous operation of a certain duration, the required minimum crew, the full capacity of oil, etc. The minimum weight is also important, and it is defined in [4] as the minimum sum of the empty weight (aircraft, fixed ballast, unusable fuel, full operating fluids such as oil, hydraulic fluid and other fluids required for normal operation of aircraft systems), the weight of 77 kg for each occupant for normal and commuter category aeroplanes, or 86 kg for utility and acrobatic category aircraft, and the weight of the fuel necessary for a certain time of operation defined by the specific airworthiness requirements. For military pilots there is additional flight equipment, such as a standard parachute (8 kg), helmet (1.9 kg), and pilot suit and shoes (2 kg), with a total weight greater than 12 kg.
The APS weight of the aircraft is defined [4] as the aircraft prepared for service weight (a fully equipped operational aircraft, empty, without crew, fuel or payload). The arm is the horizontal distance from the Datum (reference) plane (chosen by the designer) to the CG of a chosen item. The center of gravity (CG) is defined as the point at which an aircraft would balance if suspended. Technically, at the CG the sum of all weight force moments about the CG is equal to zero. It means that if the weight of any component is wn and its distance from a reference datum is xn, then

    w·x − w1·x1 − w2·x2 − ... − wn·xn = 0                 (1)

where w is the weight of the whole and x the distance of the CG from the datum. The CG limits (acceptable fore-and-aft limits of the CG) are defined as the extreme locations of the center of gravity within which the aircraft must be operated at a given weight [6]. Fig.1 shows the view of the weight and balance terms and measures.

Figure 1. Main aircraft weights (pilots, equipment, engine, fuel, etc.) and their arm distances from the Datum plane

The CG must be established in both the longitudinal and the vertical direction. The aircraft designer must demonstrate that in all likely loading conditions the actual CG will remain inside the CG limits, without undue penalties in the form of loading restrictions [3]. The location of the CG may be expressed in terms of length from a Datum plane specified by the aircraft designer (manufacturer), or as a percentage of the M.A.C. (the chord of a rectangular wing with the same span and similar pitching moment characteristics [6]). Usually the M.A.C. is a datum to which the aerodynamic center of the wing is referred, and it is the primary reference for longitudinal stability considerations.
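A minimal Python sketch of the balance condition (1) is given below, assuming an illustrative set of component weights and arms; the numbers are placeholders and are not data for any aircraft discussed in the paper.

def center_of_gravity(weights, arms):
    """Balance condition (1): the CG arm x for which the sum of moments about the CG is zero,
    i.e. x = sum(w_n * x_n) / sum(w_n). Weights in kg, arms in m from the Datum plane."""
    return sum(w * x for w, x in zip(weights, arms)) / sum(weights)

# Illustrative loading case (assumed values):
weights = [650.0, 86.0, 86.0, 60.0]   # APS weight, two pilots, fuel [kg]
arms    = [2.10, 1.85, 2.90, 2.40]    # arm of each item from the Datum plane [m]
print(round(center_of_gravity(weights, arms), 3))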

2.2. Effect of the Location of the Center of Gravity on Stability

The stability and control characteristics of an aircraft determine its ability to fly smoothly at a constant attitude and air speed, to recover from the effects of atmospheric disturbances, and to respond adequately to the control of the pilot [5]. Stability in an aircraft is always desirable, but not necessarily in the maximum possible degree. Stability has a direct effect on controllability. The controllability of an aircraft determines the ease of operating its controls and/or its responsiveness to the controls. The mechanical stability of the aircraft has two components: static stability and dynamic stability. One of the factors that affect the static longitudinal stability of an aircraft is the location of the center of gravity (CG) relative to the neutral point of the aircraft (the point about which the pitching moment of the aircraft remains constant even when the angle of attack of the aircraft is changed). If the CG of an aircraft is located ahead of the neutral point (for an aircraft with a classical aerodynamic scheme, such as the example light piston aircraft; Figure 1), the aircraft possesses positive static longitudinal stability.
There is a range of CG limits, generally ahead of the neutral point and the center of pressure of the aircraft, in which the CG must be located to preserve safety of flight. In this range there is an optimum point at which the aircraft is the most stable, most controllable, and most effective. Any location of the CG either forward or aft of this optimum point endangers safety by reducing the stability, controllability, and effectiveness of the aircraft to an extent comparable to its distance from the optimum point [3].
Excessive stability (CG far forward from the neutral point) reduces maneuverability and renders the aircraft stiff, or heavy, to the controls. Too little stability (CG approaching the neutral point, that is to say, approaching neutral stability) renders the aircraft too sensitive and too responsive to the controls, reduces the control feel received by the pilot, and increases the risk of over-control by the pilot. The designer must achieve a satisfactory balance, or compromise, between stability and controllability. Therefore, it is very important to design the optimal position of the CG relative to the neutral point for all possible loads and configurations.
When an aircraft is rigid, stability can be measured in terms of fixed geometrical factors, like the configuration and the location of the CG. Real aircraft, however, bend and twist. Their skins deform, and there is a constant aeroelastic interaction between what the pilot and other disturbances do to the apparent constants [10]. For example, fuel is consumed and the CG moves. Aerodynamic pressures suck the skin out here and push it in there, altering the profiles of the wing, tail surfaces and fuselage. In all of these circumstances the CG margins cease to have strict relevance. However, all of the mentioned situations, and much more, such as the effects of drag, thrust and prop-wash, are out of the scope of this paper and will not be explained here.
If balance is not properly controlled, the CG may be in such a position that it will impose loads upon the structure of the aircraft that are substantially higher than those computed for the normal CG location [3].

3. CALCULATION OF WEIGHT AND CENTER OF GRAVITY

Aircraft design is a series of compromises. Every design alternative has a weight effect (positive or negative). Weight control can be described as the process by which the lightest possible aircraft is derived within the constraints of the governing design criteria [3]. During the design of the aircraft, at the very beginning of the design process, in the conceptual phase of design, it is impossible to calculate the exact solutions for many of the aircraft's main characteristics and parameters at the first attempt. The most successful aircraft designers make the first aircraft sketch as a vision of the side view of the aircraft and the fuselage that we are most aware of on the ground, because it embodies most of the practical and aesthetic lines which attract or repel us at first sight [6]. Another good reason for drawing the fuselage first is that it has to fit the human body, the size and proportions of which dominate the cockpit and cabin shape and seat arrangements, windscreen and window shapes, sizes and angles (Fig.1).
The estimation of the weight of a conceptual aircraft design is a critical part of the design process. The weights engineer interfaces in this phase with all other engineering groups and serves as the referee during the design process [2]. There are many levels of weight analysis. In the beginning the designer must use crude statistical techniques for estimating the aircraft weight. More sophisticated weights methods, which come later in the design process, estimate the weight of the various components of the aircraft and then sum them for the total aircraft empty weight [2]. A commonly used method uses detailed statistical equations for the various aircraft components. Aircraft components are usually grouped in the structural group (wing, tail planes, fuselage, landing gear, air induction system), the equipment group (flight controls, instruments, hydraulic, pneumatic, electrical, avionics, etc.), the propulsion group (engine, accessory gearbox, exhaust system, cooling provision, engine controls, fuel system, tanks, etc.), and the payload group (pilot, fuel, oil, cargo, ammunition, pylons, expendable weapons, etc.) in order to make the calculation easier.

Figure 2. Center of gravity envelope diagram
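A generic sketch of the exponential statistical weight equations mentioned in Section 2 is given below; the coefficient, the exponents and the input parameters are hypothetical placeholders and do not correspond to any correlation from [2] or [3].

def component_weight(a, parameters, exponents):
    """Generic exponential statistical weight equation: W = a * x1**b1 * x2**b2 * ...
    The coefficient a and the exponents b_i come from regression over existing aircraft."""
    w = a
    for x, b in zip(parameters, exponents):
        w *= x ** b
    return w

# Hypothetical component-weight correlation (all numbers are placeholders):
print(round(component_weight(0.04, [12.0, 120.0, 4.4], [0.6, 0.7, 0.5]), 1))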

In a group weight calculation, the distance to the weight datum (an arbitrary reference point) is included, and the resulting moment is calculated. These moments are summed and divided by the total weight to determine the actual CG location. The CG varies during flight as fuel is burned off and weapons are expended [2]. To determine whether the CG remains within the limits established by the aircraft stability and control analysis, a CG-envelope plot is drawn during aircraft design (Fig.2). The CG must remain within the specified limits during the whole flight.
Fig.2 shows a diagram with three curves (two pilots in the aircraft; one pilot in the front cockpit; one pilot in the rear cockpit) that describe the successive positions of the aircraft CG during a typical flight: take-off, retraction of the landing gear, consumption of fuel with the corresponding reduction of the aircraft weight, extension of the landing gear, and landing with the final weight Wlanding. The diagram is given for the standard pilot weight. In the case of a pilot body weight greater than the standard, or calculated, weight, at least three possible critical points can be identified, as shown on the diagram (Fig.2).

4. EXPERIMENTAL DETERMINATION OF WEIGHT AND BALANCE

Every produced aircraft must be experimentally weighed to establish an accurate base for weight and balance control in the flight stage. In establishing the basic weight and CG of each aircraft, a complete equipment inventory must be conducted [3]. All of the required equipment must be properly installed, and there should be no equipment installed that is not included in the equipment list. The aircraft must be weighed inside a hangar (where wind cannot blow over the surface and cause fluctuating or false scale readings) [1]. The completeness of the aircraft equipment must be checked and approved by an official person. The aircraft must be horizontally leveled (its level flight attitude), standing on jacks with weight scale pickups. The recommended practice for aircraft weighing is measuring on three pickup scales with extended landing gear, all control surfaces in the neutral position and the canopy closed (Fig.4).

Figure 4. Forward support scales; aircraft ready to be weighed

There are two basic types of scales used to weigh aircraft: scales onto which the aircraft is rolled so that the weight is taken at the wheels, and electronic load cell scales, where a pressure sensitive cell is placed between the aircraft jack and the jack pads on the aircraft [1] (Fig.5). Electronic load cells are usually used when the aircraft is weighed by raising it on jacks.

4.1. Weight determination

The aircraft basic empty weight (BEW) is the sum of the three recorded pickup weights on the three supports:

    w = w1 + w2 + w3                                      (2)

where:
w1        recorded forward support weight
w2, w3    recorded rear support weights.
The aircraft weight with fuel must be checked in the same way. Every weight determination must be checked at least three times.

4.2. Determination of the Empty Weight Center of Gravity

The position of the aircraft empty weight CG is calculated using equation (3):

    x = [ w1·D + (w2 + w3)·D3 ] / w                       (3)

where:
D    distance of the forward weighing point(s) from the datum (reference) plane,
D3   distance of the rear weighing point(s) from the datum (reference) plane,

    D = A − D1                                            (4)

    D3 = D + D2                                           (5)

A    distance of the M.A.C. leading edge point from the datum (reference) plane,
D1   distance of the forward weighing point from the M.A.C. leading edge point,
D2   distance between the two weighing points, forward and rear (Figure 3).

Figure 3. Center of gravity determination
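A minimal Python sketch of expressions (2) to (5) is given below, assuming illustrative scale readings and distances; none of the values are measurements from the paper.

def empty_weight_and_cg(w1, w2, w3, A, D1, D2):
    """Expressions (2)-(5): basic empty weight and its CG arm from the datum plane.
    w1, w2, w3 : forward and rear support readings; A, D1, D2 : distances defined in Fig.3."""
    w = w1 + w2 + w3                     # (2) basic empty weight
    D = A - D1                           # (4) arm of the forward weighing point
    D3 = D + D2                          # (5) arm of the rear weighing points
    x = (w1 * D + (w2 + w3) * D3) / w    # (3) CG position from the datum plane
    return w, x

# Illustrative readings [kg] and distances [m] (assumed values):
print(empty_weight_and_cg(w1=180.0, w2=270.0, w3=270.0, A=2.0, D1=1.2, D2=1.6))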

Figure 5. Weighing equipment; electronic scale

When the CG of an aircraft falls outside of the limits, it can usually be brought back in by using ballast. Temporary ballast, in the form of lead bars, must be secured to the airframe so that it cannot shift its location in flight. Temporary ballast can also be used in the case when the weight of the pilots exceeds the calculated weight and when it endangers the aircraft balance. In the case of a repair or modification (alteration) after which the aircraft CG falls outside of its limits, permanent ballast (usually blocks of lead) can be installed [1].

5. PILOT ANTHROPOMETRIC DATA

The anthropometric design requirements for pilot stature in airplanes and helicopters in the USA were defined by the Civil Air Regulations [CAR 4b.353(c) and CAR 7.353(b)] in the 1950s. The CAR stated that airworthiness compliance shall be demonstrated for individuals ranging from 157.8 cm to 182.4 cm in height [18]. These design heights for stature remained in effect until 1975, when the regulation was changed, just for transport category airplanes, to increase the maximum crewmember height to be considered from 182.4 cm to 190.0 cm for the design of cockpit controls.
The traditional aircraft design requirement covering a 5th percentile female to a 95th percentile male, which has been adopted by the aviation industry, was based on historical military standards. The design of aircraft for the 5th to 95th percentile is only valid if there is a specific definition of the population basis reflecting the target pilot population, and if the specific relevant body measurements and muscular strengths are defined [18]. However, an increasing number of aviation designs today utilize a multivariate approach of body dimension combinations instead of univariate percentiles [20].
Raymer in [2] argues that in aircraft and cockpit conceptual design it is first necessary to decide what range of pilot sizes to accommodate. For most military aircraft, the design requirements include accommodation of the 5th to the 95th percentile of male pilots (i.e. a pilot height range of 1.66 m to 1.86 m at that time). Due to the expense of designing aircraft that would accommodate smaller or larger pilots, the services exclude such people from pilot training. Today women pilots enter the Serbian military flying profession in certain numbers. Raymer predicted in the 1980s that modern (21st century) military aircraft would require the accommodation of approximately the 20th percentile (measured at that time) female of about 1.5 m tall and larger [2]. This obviously affected and still affects the detailed layout of cockpit controls and displays, and aircraft balance [11].
Anthropometric data are required and necessary as a basic input in the human factors engineering of arranging military aircraft cockpits. However, according to [7], there are several common errors to be avoided by designers when they apply anthropometric data to the design of the cockpit arrangement. These are: (1) designing to the midpoint (50th percentile) or average ("standard man") (the 50th percentile or mean shall not be used as a design criterion, as it accommodates only half of the pilots), (2) the misperception of the "typical" sized person, (3) generalizing across human characteristics, and (4) summing of measurement values for like percentile points across adjacent body parts [7].

5.1. Historical data

Many countries occasionally organize in their air forces extensive surveys of aviators' and pilots' anthropometric dimensions in order to have accurate measures for designing or flying trainer, transport or fighter aircraft. These measurements have been conducted for different purposes, such as cockpit arrangement, pilot suit making, or any other purpose. For example, according to [13], an anthropometric survey finished in 1966 encompassed 200 Royal Air Force and Royal Navy aircrew flying the F-4 Phantom. Forty-four human body measurements were taken on each subject (pilot), such as age, weight, standing height (stature), chest girth, torso hoop, ankle, knee pivot height, wrist height, axillary height, knee height, sitting height, arm reach, shoulder breadth, shoe size, and many others. The subjects were measured wearing only their own underpants. Some subjects were measured more than once, as a check on the repeatability of measurement by the same operator and as a comparison of performance between the two operators who shared the task of measuring. Percentile tables for each of the separate dimensions were presented in the survey. The tables include mean, standard deviation, coefficient of variation and range.
A comparison of some of the anthropometric data obtained from the 1966 survey with the data provided by the 1944 survey of British military aircrew is also given in [13]. The comparison table indicates that in the 22-year period between the two surveys significant changes took place. For example, the pilots' average weight increased by 8.6 kg and their stature (height) by 2.97 cm, etc. The maximum of the frequency distribution for the pilots' height is about 177.5 cm, and for the pilots' weight about 73 kg [13]. The height range is 161.8 cm minimum and 195 cm maximum. The weight range is 57.1 kg minimum and 108.4 kg maximum.
In [14], the results of the 1987-1988 anthropometric survey of 1744 men and women (of different races, such as Caucasian White, Afro-American Black, Hispanic, Asian/Pacific, and others) of US Army personnel were presented in the form of summary statistics, percentile data and frequency distributions. The dimensions given in this report include 132 standard human measurements (direct and derived) made in the course of the survey.

Bradtmiller in [16] made comparison between civilian and


army people that were measured in period between 19871988 and 2010-2012. It was concluded that stature (height)
was not changed, but the weight increased drastically. He
concluded that changes over time are similar between US
army people and civilians. The conclusion was also noticed
that for new design purposes it is more useful to use 3D
scans (94 human dimensions) of human body that was
developped under the project ANSUR II, [16].

OTEH2016

According to literature, the results indicate that significant


differences in body dimensions (height, weight, etc.) do
exist among men of different aero-ratings and different ages
(cadets, young officers or aged officers).

5.2. Serbian anthropometric data


One of the most recent domestic historical research and
surveys were conducted by Glava in his doctoral thesis,
[9], where the anthropometric data were presented for last
119 years. According to [9], the first systematic data of
human heights (stature) and weights in Serbian military
sector (soldiers, cadets and officers) where recorded during
1897 and 1898. The 20-22 years-old recruits had average
height 168.7 cm and weight 63.7 kg. Since those years to
the present day the average height has increased for 12.5
cm. Moreover, the average body weight of Yugoslav
working age people was 77.2 kg in 1964 and Serbian 87.3
kg in 2014. The average weight of officers in 1980 and 1999
compared to the present day average weight is increased by
9 kg and 3.9 kg, respectively.
According to researches published in [9], the human average
height of current generation of cadets (21.1 years-old) is
the same as the average height of aged people (42.7 yearsold), which is 180.96 cm. In addition, the human average
weight of current generation of cadets (measured wearing
only their own underpants, without uniforms) is 78.5 kg
compared to 88.1 kg of the average weight of aged people
(lower and senior rank officers). Researches also shown that
the average heights are in the range from 174.65 cm to
187.27 cm, and the average weights are in the range from
75.68 kg to 99.38 kg for the 68% of Serbian officers.
Comparing these anthropometric data to other recent
domestic unpublished data it can be seen that the
anthropometric measures of current military pilots are
slightly different to data presented in [9].

Figure 6. Officers' body gain weight; 19442014


For the purpose of this paper only two dimensions, body
weight and height (stature), were analyzed given that the
pilot body weight mainly affects the aircraft balance and
pilot body height mainly affects the cockpit arrangement
and indirectly aircraft balance, are shown on Figures 6 and
7. Fig.6 shows officers (army and air forces) body weight
evolution according to anthropometric surveys conducted
since 1944 to 2012 in UK and USA air forces. Fig.7 shows
officers height (stature) evolution according to the same
surveys, [13-14, 16-17, 19-23]. Additionally, the data of
Serbian officers anthropometric measures (weight and
height) were presented on Figures 6 and 7, marked as 2014
data, using data published in [9].

Average body height of current Serbian cadets is


approximately the same as the current younger and senior
officers, and it is not statistically significant [9]. The
difference between the body weights is statistically
significant and it is expected that the current cadets and
young officers will increase their weight by 10 kg in next
three decades. Concluding, the new-coming generation will
be, if not taller, but with greater body weight, with
corresponding (insignificant) need for new cockpit
arrangement and (considerable) influence on weight and
balance of aged aircraft.

5.3. Digital Human Modelling


Since the 1980s, 3D scanning technology has emerged as a
tool to measure the size and shape of the human body as
well as the linear dimensions that traditional anthropometry
provides. The technology has improved Digital Human
Modeling (DHM) and CAD applications required for
equipment and workstation design. 3D scanning technology
gives users the ability to extract new measurement
information after the data has been gathered from the
subjects, as well as the advantage of using 3D computer
models for concept visualization or for rapid prototyping
[20].

Figure 7. Officers' height (stature) evolution; 1944-2014


For many years the standard man weighed 77 kg, but people are growing bigger. The western-country male is heavier and bulkier than his pre-World War 2 counterpart. Racial variations in human size and shape can make a considerable difference to weight and balance calculations, to fuselage proportions, and to cockpit arrangement. The average weight in the 1980s is nearer 82 kg clothed, with deviations of 14%. Geometrical measurements can also vary by 5% [10].

Digital human modeling tools are used to reduce the need for
physical tests and to facilitate proactive consideration of
ergonomics in virtual product and production development
processes [15]. DHM tools provide and facilitate rapid
simulations, visualizations and analyses in the design process
when seeking feasible solutions on how the design can meet set
ergonomics requirements. DHM software includes a digital
human model, also called a manikin, i.e. a changeable digital
version of a human. An important part of DHM systems is
anthropometry, the study of human measurements, and the
functionality of creating human models.
New scanning technologies along with more sophisticated
statistical methods for matching/forecasting valid data from an
existing database represent a significant improvement over
legacy 1-D (i.e. tape measure, calipers) body measuring
techniques. Body scanning technology has proven to be less
expensive, faster, and a more reliable way to measure,
especially for large surveys, [18]. The obvious advantages of
3D scan images for undertaking the measures of target (pilot)
population include the design of better-fitting aircraft cockpits,
and faster response time between measurement and delivery to
the design offices. Three-dimensional human analogues created
from scanned images have an almost infinite number of uses in
the design of workplaces such as aircraft cockpits where
accommodation, lines of vision, and ability to reach hand and
foot controls can be tested on a computer screen. Fully dressed
and equipped pilots with helmets can also be scanned for input
into CAD models for assessing workplace interactions [19].
As an anthropometric tool, 3D scanning complements
traditional methods (1D) in two ways. First, a pilot-participant
can be scanned in a matter of seconds, and the scanned images
become permanent records. Users can return to them as many
times as needed to extract new dimensions or to employ them
in the creation of computer models. Second, the relationship
of one dimension to another, or to several other dimensions, is
clearly apparent. This aids in understanding body shape, as well
as body size [19].
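As a small illustration of the "extract new dimensions later" workflow described above, the Python sketch below pulls two linear dimensions out of a hypothetical 3D scan stored as an array of points. The array layout, the axis convention and the random stand-in point cloud are assumptions chosen only for demonstration; they do not represent any scanner's actual output format.

import numpy as np

# Hypothetical scan: an (N, 3) array of surface points in metres,
# with x lateral, y fore-aft and z vertical (an assumed convention).
def stature_m(points):
    """Stature taken as the vertical extent of the scan."""
    return points[:, 2].max() - points[:, 2].min()

def max_breadth_m(points):
    """Maximum lateral extent of the scan."""
    return points[:, 0].max() - points[:, 0].min()

rng = np.random.default_rng(0)
cloud = rng.uniform([-0.25, -0.15, 0.0], [0.25, 0.15, 1.81], size=(50_000, 3))
print(f"stature ~ {stature_m(cloud):.2f} m, breadth ~ {max_breadth_m(cloud):.2f} m")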
Choi et al. [20] made an extensive survey in 2011 of 640 male and 60 female aircrew, measuring 60 traditional body dimensions. The measured dataset was compared to the 1967 USAF and the Joint Strike Fighter datasets, and the results of these comparisons show that the aircrew population is growing heavier and exhibiting increases in some measurements related to increased mass. Some of the measured dimensions are shown in Figures 6 and 7.


Fig.8 shows a standard pilot (male) without parachute as defined in [10], with the dimensions of a standard pilot given in [10], dated 1955.

6. DISCUSSION
According to the authors' experience, there are several recommendations in the area of collecting relevant data on pilot population measurements, such as:
- Execute regular measuring surveys of all pilots and of focused groups of pilots (helicopter pilots, fighter pilots, transport aviation pilots, cadets, female, male, etc.),
- Make measuring surveys using fast and reliable modern optical digital 3D scanning technology, which gives the ability to extract new measurement information,
- Use mathematical statistical methods and prediction methods for the analysis of a focused pilot group in the case of a lack of up-to-date measured data,
- Use modern computer methods for 3D digital human modeling (DHM) and CAD applications for design,
- Use the new multivariate approach of body dimension combinations instead of univariate percentiles in the case where there is no specific definition of the population basis reflecting the target pilot population, and where the specific relevant body measurements and muscular strengths are not defined.
The focused group and the measured human dimensions must be in accordance with the functions that must be accomplished and the equipment to be managed.
Recommendations in the area of aircraft design are as follows:
- Make more iterations of weight and balance calculations during aircraft conceptual design,
- Use modern 3D computer methods for the design of the aircraft structure, equipment, pilot cockpits, cockpit arrangement, and for weight and balance engineering,
- Measure all produced aircraft experimentally after any changes, modifications or equipment changes.
During aircraft design, always keep in mind that the aircraft will be in service for the next 40 years with pilots (and crew and passengers) who will probably be heavier and taller than their counterparts at the moment of the aircraft design. Accordingly, it is necessary to take this into account during weight engineering and cockpit arrangement.
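To illustrate the last recommendation, the minimal Python sketch below applies the standard moment method for weight and balance (cf. [1]) to a hypothetical light-trainer loading. The empty mass, fuel load and arms are invented placeholder values; only the two occupant weights (the 77.2 kg and 87.3 kg averages quoted in Section 5.2) come from this paper.

# Moment method: CG = sum(mass * arm) / sum(mass). Geometry values are
# hypothetical placeholders; only the occupant weights are from Section 5.2.
def centre_of_gravity(items):
    """items: iterable of (mass_kg, arm_m) pairs measured aft of the datum."""
    total_mass = sum(m for m, _ in items)
    total_moment = sum(m * a for m, a in items)
    return total_mass, total_moment / total_mass

empty_aircraft = [(680.0, 2.10)]   # assumed empty mass and arm
fuel = [(90.0, 2.30)]              # assumed fuel mass and arm
seat_arm = 2.00                    # assumed pilot/instructor seat arm

for label, occupant_kg in (("1964 average occupant", 77.2),
                           ("2014 average occupant", 87.3)):
    mass, cg = centre_of_gravity(empty_aircraft + fuel + [(2 * occupant_kg, seat_arm)])
    print(f"{label}: take-off mass {mass:.1f} kg, CG {cg:.3f} m aft of datum")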

7. CONCLUSION
This paper emphasizes the importance of recording and updating exact average body mass data for the pilot population assigned to fly a specific aircraft. The paper explains the basic methods that are used during aircraft design and during regular maintenance in service in the area of weight engineering and balance calculation. The paper also stresses possible discrepancies with, or conformance to, current airworthiness requirements in the same area.
Figure 8. Geometrical 3D construction of a standard man (without parachute) [10]

Aircraft designers often utilize legacy anthropometric databases that may not reflect the present target pilot population. Although most current aviation designs have utilized legacy anthropometric data (dated thirty or more years ago), there are a sufficient number of valid existing anthropometric databases that include body dimension and weight (mass) measurements compiled from data sets gathered through large surveys that are representative of the target pilot population. Furthermore, new scanning technologies, 3D modeling using Digital Human Modeling (DHM) tools, along with more sophisticated statistical techniques for matching/forecasting target populations from legacy anthropometric databases, have enhanced the ability to effectively and efficiently gather these data on a periodic basis; hence anthropometric data that are not representative of the pilot population should no longer be accepted as a limitation in aircraft design [18].


[10] AvP970 2, Book 2, App 8, Design Requirements for Aircraft for the Royal Air Force and Royal Navy, Ministry of Aviation, 1955.
[11] Anonymous, Anthropometry of Flying Personnel,
Wright Air Development Center, TR 52-321, 1954.
[12] DOD-HDBK-743A, Military Handbook Anthropometry
of U.S. Military Personnel, Department of Defence,
USA, 1991.
[13] Simpson, R.E., Bolton, C.B., An Anthropometric Survey of 200 R.A.F. and R.N. Aircrew and the Application of the Data to Garment Size Rolls, Engineering Physics Dept., R.A.E., Farnborough, 1970.
[14] Gordon, C.C., Churchill, T., Clauser, C.E., McConville, J.T., Tebbetts, I., Walker, R.A., 1988 Anthropometric Survey of US Army Personnel: Methods and Summary Statistics, Technical Report NATICK/TR-89/044, United States Army Natick Research, Development and Engineering Center, Natick, 1989.
[15] Brolin, E., Hanson, L., Hogberg, D., Ortengren, R.,
Conditional Regression Model for Prediction of
Anthropometric Variables,
[16] Bradtmiller, B, Anthropometric Change in the US
Army: Using the 2012 Army Data for Civilian Design,
The 19th Annual Applied Ergonomics Conference,
2016.
[17] Gordon, C., US Army Anthropometric Survey Database: Downsizing, Demographic Change, and Validity of the 1988 Data in 1996, Technical Report NATICK/TR-97/003, 1996.
[18] Joslin, R.E., Examination of Anthropometric Databases
for Aircraft Design, Proceedings of the Human Factors
and Ergonomics Society 58th Annual Meeting, 2014.
[19] Gordon, C.C., Blackwell, C.L., Bradtmiller, B.,
Parham, J.L., Barrientos, P., Paquette, S.P., Corner,
B.D., Carson, J.M., Venezia, J.C., Rockwell, B.M.,
Mucher, M., Kristensen, S., 2012 Anthropometric
Survey of U.S. Army Personnel: Methods and
Summary Statistics, Technical Report NATICK/TR-15/007, 2014.
[20] Choi, H.J., Coate, A., Selby, M., Hudson, J.,
Whitehead, C., Aircrew Sizing Survey 2011, Technical
report, AFRL-RH-WP-TR-2014-0113, 2014.
[21] MIL-STD-1472D, Human Engineering Design Criteria
for Systems, Equipment and Facilities, 1989.
[22] Weight, Height, and Selected Body Dimensions of
Adults, USA 19601962, National Center for Health
Statistics, 1965.
[23] Hudson, J.A., Zehner, G.F., Robinette, K.M., JSF
Caesar: Construction of a 3-D Anthropometric Sample
for Design and Sizing of Joint Strike Fighter Pilot
Clothing and Protective Equipment, AFRL-HE-WP-TR-2003-0142, United States Air Force Research
Laboratory, 2003.

The authors firmly advocate a multidisciplinary approach to the process of aircraft design in the area of weight and balance engineering and cockpit arrangement. This means that designers must use modern multivariable optimization of the many significant parameters that drive an optimal aircraft design. Using old and obsolete pilot measures during the design of a new aircraft could, instead, jeopardize aircraft safety. The authors also emphasize the need for establishing a systematic methodology for the continual recording of anthropometric data for the focused population (pilots), which will contribute to an optimal aircraft design process.

References
[1] FAA-H-8083-1, Aircraft Weight and Balance
Handbook, US Department of Transportation, Federal
Aviation Administration, Flight Standards Service,
2007.
[2] Raymer, D.P., Aircraft Design: A Conceptual
Approach, AIAA Education Series, Air Force Institute
of Technology, 1999.
[3] Niu, M. C-Y., Airframe Structural Design, Practical
Design Information and Data on Aircraft Structures,
Conmilit Press Ltd., 1988.
[4] CS 23, Certification Specifications for Normal, Utility,
Aerobatic, and Commuter Category Aeroplanes,
European Aviation Safety Agency, Annex to ED
Decision, 2012.
[5] Crabs, C. C., Aircraft Stability and Control, Aerospace
Engineer and Pilot, Aircraft Operations Branch.
[6] Stinton, D., The Design of the Aeroplane, BSP
Professional Books, Oxford, 1983.
[7] HFDS, Anthropometry and Biomechanics, (Amended
2009), Chapter 14, 2003,
[8] Jenkinson, L.R., Marchman III, J.F., Aircraft Design
Projects, Butterworth Heinemann, Oxford, 2003.
[9] Glava, B., Motor skills, morphological status and life
habits among members of Serbian Army Forces,
(doctoral thesis in Serbian), Faculty of Sport and
physical education, University of Belgrade, 2015.


PROTOTYPE SOVA DEVELOPMENT: AIRCRAFT LIFE CYCLE EXTENSION
VANJA STEFANOVI
Military Technical Institute, Belgrade, vti@vti.vs.rs
MARIJA BLAI
Military Technical Institute, Belgrade, vti@vti.vs.rs
MARINA OSTOJI
DOO UTVA AI, Panevo, majce74@yahoo.com
TONKO MIHOVILOVI
DOO UTVA AI, Panevo, tonkojetonko@gmail.com
DRAGAN ILI
Jugoimport SDPR, Belgrade, dragan_ilic01@yahoo.com

Abstract: With the aim of developing a new aircraft, a single-engine four-seater aircraft with fixed landing gear, an analysis of an existing structure has been conducted. The analysed subject was an earlier-produced UTVA 75 structure, which has been used as the basis of the new aircraft prototype. Elements of the structure were tested with different Non-Destructive Testing (NDT) methods. This paper describes the condition of the existing structure and its test results, as well as the necessary repairs and replacement of the damaged parts, in order to extend the life cycle of the structure.
Keywords: life cycle extension, aircraft structure, structure repairs, NDT methods.

1. INTRODUCTION
Making a general-purpose airplane, suitable for any purchaser or any use, is an impossibility. However, it is frequently possible to arrange a design which would simplify future changes without sacrificing either structural or aerodynamic efficiency or taking a weight penalty [1]. The design process must not only address interactions between traditional aerospace disciplines (e.g. aerodynamics, structures, controls, propulsion), but should also account for life cycle disciplines (e.g. economics, reliability, manufacturability, safety, supportability, etc.). These disciplines can bring a variety of uncertainties of differing natures to the design problem, especially as innovation occurs within and amongst the disciplines [2]. In the process of upgrading an existing aircraft, some of the traditional aerospace disciplines are reviewed less, because the aerodynamics is already defined by the current structure. Nevertheless, controls and propulsion are often revised to follow the requirements of the customers and to be competitive on the market. When it comes to life cycle disciplines in a process of upgrading, they should be considered fully from the start (e.g. manufacturability of the prototype and possible serial production, component supportability, etc.), and then a decision should be made whether the new product is affordable or not.
With a fruitful tradition of designing and manufacturing metal-structure aircraft (Galeb G-2, Supergaleb G-4, Utva 75, Lasta and now Sova), mostly for training military pilots, Serbia has a vast pool of experienced experts of all generations who can deal with new challenges in this area. The aircraft were designed and produced for our own needs, and a number were also sold to foreign countries. Positive feedback from customers all around the world, and the fact that many aircraft are still in operational use, confirm our production capabilities and proficient design approach to different types of aircraft. In the case of the Utva 75 aircraft (picture 1), 140 were produced and plenty are still in use.

Picture 1. Utva 75 aircraft


The all-metal Utva 75 aircraft, certified according to FAR 23, has a side-by-side seating configuration with the advantage that the pilot and instructor can see each other's actions. With the conversion into a four-seater aircraft, future pilots sitting in the back seats during the flight will be able to learn from the instructor and pilot by watching. A special


advantage of this aircraft version (the four-seater Sova), which can be used for the initial training of several trainees, is that it considerably reduces the cost of training. That is the reason why the popularity of the four-seater trainer is nowadays increasing all around the world.

NDT methods were used in the process of examining the structure for life-cycle extension. All parts of the structure that were planned to be kept, and especially the vital ones, were subject to inspection with different methods, performed by the firm RD Diagnostics and approved by the Military Technical Institute. The following NDT methods were used in the process of testing the structure.

With the aim of converting the Utva 75 aircraft into the four-seater aircraft Sova (Utva 75 A41M), the process of creating the prototype and modifying the existing structure is shown in this paper.

Visual inspection (VT) is probably the most widely used of all the nondestructive tests. VT was conducted on all parts, using a 5x magnifying glass and an endoscope as tools. The magnifying glass was used to estimate the depth of corrosion in the places where it was discovered.

An endoscope (ES) was used for testing all inaccessible places, the inner surfaces of the wings and the fitting interior, the inner surfaces of the fuel tank, fins, flaps, elevator and rudder, in order to detect hidden defects and corrosion.

The penetrant test (PT) has the fundamental purpose of increasing the visibility contrast between a discontinuity and its background. It was carried out on some fuselage fittings and pedals in order to detect surface defects and cracks.

The eddy current test (ECT) is well suited to the detection of service-induced cracks, usually caused either by fatigue or by stress corrosion. The inspection can be used with a minimum of part preparation and a high degree of sensitivity. ECT was performed on all connections that are not going to be replaced. That includes the connections of the wing and fuselage structure, e.g. rivets, bolts, the connections of all fittings and the connections of the vertical and horizontal stabilizers. All magnetic parts were also included in order to detect cracks and surface defects.

The ultrasonic test (UT) permits the detection of small flaws with only single-surface accessibility, is capable of estimating the location and size of a defect, and can also be used for thickness measurement where only one surface is accessible. UT was performed on the chosen rivets, especially those which connect fittings with the rest of the structure, and on axles and joints in order to determine defects of homogeneity.
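As a small numerical illustration of the thickness-measurement mode mentioned for UT, the Python sketch below applies the standard pulse-echo relation thickness = velocity x round-trip time / 2. The sound velocity used is a typical handbook value for aluminium alloy, assumed here for illustration rather than taken from the inspection report.

# Pulse-echo thickness: t = v * (round-trip time) / 2. The longitudinal
# velocity of ~6320 m/s is a typical value for aluminium alloy (an assumption).
def thickness_mm(round_trip_us, velocity_m_s=6320.0):
    """Wall thickness in mm from the echo round-trip time in microseconds."""
    return velocity_m_s * (round_trip_us * 1e-6) / 2.0 * 1000.0

print(f"{thickness_mm(0.63):.2f} mm")  # ~2.0 mm skin for a 0.63 us round trip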

2. STRUCTURE LIFE-CYCLE
EXTENSION
Nowadays, conversion of an aircraft, with upgraded avionics and other systems, is the most economical approach and a frequently applied solution. Whether it is a case of a business jet conversion to a military trainer [3], as it was in the United States Navy, or a decision on an aircraft conversion within an air force with a more limited budget, it is a money- and time-saving solution without loss of efficiency.
With some past experience of converting the Utva 75 into a four-seater aircraft, a feasibility study was carried out and a conclusion was reached. The Utva 75 structure meets the performance goals of the new aircraft; with some changes in the cockpit area, upgraded avionics and a redesign of the propulsion and controls, the prototype could be finalized in a short time. The decision was made that the prototype would be a conversion of the off-the-shelf Utva 75 structure, presented in picture 2.

Picture 2. Future prototype structure


In our case, UTVA Aircraft Industry has the main responsibility for designing and producing all the necessary parts and has to ensure the project finishes on time. The Military Technical Institute and RD Diagnostics d.o.o. were engaged in the process of examining the existing structure and its possible life cycle extension.

Every element that was inspected was first detached from the rest of the structure, if possible, then cleaned and prepared for testing.

2.2. Structure testing

2.1. Non Destructive Testing Methods

The aircraft chosen to be the prototype was made of main assemblies of an already produced Utva 75, an all-metal structure. The wing and the horizontal and vertical stabilizers were in storage and never used, but the fuselage, elevator and rudder had been in use for a short period of time. The testing was conducted in the facilities of UTVA AI with the help of personnel from the factory. They performed the first measurements by checking the geometry of the aircraft. Structure testing was the next step of the life-cycle extension process. Picture 3 shows the fuselage of the Utva 75 aircraft.

Non-Destructive Testing (NDT) methods are widely used techniques to probe structures and materials either before they enter use or as part of a maintenance programme. Whether the aim is to observe the microstructure of welds, measure the thickness of material or detect induced cracks in a material, NDT methods are an appropriate choice for fast and efficient inspection. NDT is now compulsory for many aerospace firms and is a vital part of the production process.

Picture 5. Left and right wing damaged zone


The main defect was found in the bushing of the front spar left wing fitting (picture 6, right). The irregularities were found with UT and described as a lack of homogeneity. Some mechanical damage was also found on the front spar right wing fitting (picture 6, left).

Picture 3. Fuselage in the UTVA AI facilities


As mentioned earlier, the Military Technical Institute and RD Diagnostics d.o.o. started the inspection according to an already formed plan. In this plan the main structural elements are anticipated, one by one, to be the subject of the analysis. The structure was tested with different NDT methods. In picture 4 the main structural assemblies of the examined structure are presented (wing, fuselage, vertical and horizontal stabilizer, ailerons, elevator, and rudder). Some of the major defects of the examined structure, classified by main assemblies, are presented in the text below.

Picture 6. Left - left wing fitting mechanical defects; Right - right wing bushing defect of homogeneity
The vertical stabilizer was in storage and its inner structure, tested with ES, was healthy. The brackets were tested with UT and had no inner flaws, but some mechanical damage was found on the surface at the lower bracket. The location of the bracket and the irregularities are shown in picture 7.
Picture 4. Main examined structural elements and assemblies
The wing was never in operational use; nevertheless, the process of examination was detailed. The NDT methods applied here were: VT and ES for the outer and inner surfaces and elements (skin, spars, ribs), then ECT for bolts and joints, and UT for the inspection of fittings, rivets and axles. The locations of the above-mentioned defects are presented in picture 5.

Picture 7. Left - vertical stabilizer, damaged zone; Right - lower bracket, mechanical defects


The rudder was in operational use and had corrosion on the torsion tube. NDT methods confirmed that the damage is only on the surface and, with adequate treatment and protection, it can be easily removed. The location of the torsion tube and its damage are presented in picture 8.

The elevator was in operational use and had some mechanical deformation of the skin, which was noticed by VT. The deformation of the upper skin near the outer rib appeared as a consequence of improper handling. The location of the deformed skin is presented in picture 11 and the mechanical damage in picture 12.

Picture 8. Left - rudder, damaged zone; Right - torsion tube corrosion

Picture 11. Horizontal stabilizer - damaged zone

The horizontal stabilizer had traces of surface corrosion on the fittings that connect the stabilizer with frame no. 11 in the fuselage. The location of the fitting is presented in picture 9.

Picture 12. Deformed skin of the horizontal stabilizer


Picture 9. Horizontal stabilizer - damaged zone
NDT methods have confirmed that the inner material of the fitting is without irregularities and that the corrosion is on the surface only. The damage to the fitting is shown in picture 10.

The fuselage was in operational use and demanded more dedication and work in order to examine the whole structure from the first to the last frame. It is important to note that the greatest number of modifications were performed on the fuselage, and most of the structure in the cockpit area was changed during the process of examination. All NDT methods were used in order to locate defects in the material. The main tests were performed on the fittings that connect the fuselage with the wing, the horizontal and vertical stabilizers, and the engine mount; they are placed on the first and eleventh frames. UT, ECT and PT confirmed that the inner structure of the material (fittings, bushings, brackets, bolts, axles) was flawless. There were some traces of corrosion on the first-frame fittings, but they were easily removed. The mechanical damage that was detected is presented in the following pictures: the location in picture 13, and the mechanical irregularities on the left fitting and the right front skin in picture 14.

Picture 10. Corrosion of the horizontal stabilizer fitting


seats and enlarged cabin doors.
The main modification was performed in the cabin space (seats, instrument panels, controls, etc.). There is a new digital instrument panel, new pilot sticks and engine controls (picture 16), and a redesigned pilot seat area with the addition of central and side consoles which are also used as a support for the pilot seats. The pilot seats can be set in six positions, providing suitable accommodation for any pilot.

Picture 13. Fuselage - damaged zone

Picture 15. Unchanged structure of Sova aircraft


Picture 14. Left - mechanical damages of the fitting; Right - right skin damages
After the inspection, a report was written. It includes the results for the examined structure and the requirements for mandatory repairs. After a control inspection of the structure, the experts confirmed that all necessary repairs had been conducted in compliance with the report's requests. Consequently, the life-cycle of the structure has been extended by 10 years.

3. PROTOTYPE DEVELOPMENT

Picture 16. CAD model of redesigned cabin space

The new four-seater airplane was required to perform all of the training missions. It can be used for reconnaissance, photo-shooting and other operations, depending on the equipment. One special advantage of this aircraft version, equipped with advanced digital displays, is that it can be used for the initial training of several trainees, considerably reducing the cost of the training.

The second approach was the upgrading of the aircraft systems, making the aircraft competitive on the market: a redesigned primary and secondary control system with electrical trim and flap actuators. The pedals in the cabin area are redesigned and adjustable according to the pilot's needs. There is a completely new fuel system, with a new boost pump and a fuel selector with four positions (left, right, both tanks and a fourth position for fuel off). There is a new hydraulic pipe line with the same non-retractable landing gear as on the Utva 75, and a completely new ventilation system and electrical air-conditioning unit.

The all-metal structure of the Utva 75 with a side-by-side seating configuration is the basis of the new four-seater airplane Sova. The conversion project has been conducted at the same time as the extension of the structure life-cycle. In the process of prototype development two main approaches can be mentioned: modification of the structure, and revision and upgrade of the aircraft systems.

The changes, including the reinforced structure and the mass increase, consequently led to a revision of the propulsion. A new engine has been integrated, a Lycoming IO-390-A1 B6 (max power 210 hp), paired with an appropriate two-blade Hartzell propeller. It improves the aircraft's manoeuvrability compared with the former Utva 75. The engine mount was tested

The structure revision consists of minimal changes to the wing and empennage (picture 15), but the central area of the fuselage is reinforced due to the two additional passenger


and then slightly modified according to the new engine (picture 16).

Picture 18. Sova - Technical demonstrator


The time and costs spent in the process of developing the feasibility study, then designing the modifications, producing the elements, mounting them and integrating the new aircraft systems are considerably less than those for the design and production of a completely new aircraft. Added to this, the confirmed efficiency and maintainability over 40 years of operational use of the Utva 75, and the importance of that information in the process of developing the aircraft, make this conversion an affordable solution.

Picture 16. CAD model front Sova aircraft section

4. CONCLUSION
After the inspection of the Utva 75 structure, the life-cycle extension was confirmed. The conversion, as a money- and time-saving solution, has been performed on the existing structure. The project included a redesign of the central fuselage part with emphasis on the cabin area modification. New control and propulsion systems are integrated. The whole project is supported by modern design software (CATIA V5) (picture 17).

REFERENCES
[1] Niu,M.C.Y.: Airframe Structural Design, Hong Kong Conmilit Press Ltd., Hong Kong, 1999.
[2] Mavris,D.N., DeLaurentis,D.A.: A probabilistic approach for examining aircraft concept feasibility and viability, Aircraft Design, Volume 3, Issue 2, June 2000, Pages 79-101.
[3] Lyle Jr,J.W.: Converting A Citation Business Jet to a
military trainer, Aircraft Design, Volume 1, Issue 1,
March 1998, Pages 51-60
[4] Milutinovic,S.: Konstrukcija aviona, Beograd, 1970
[5] Allen,R.: Aircraft conversions: specialist freighter
conversions work expands, Aircraft Engineering and
Aerospace Technology, 2001, Vol. 73, Issue 6,
[6] Boeing to convert five MD-11s to freighters, Aircraft
Engineering and Aerospace Technology, 2004, Vol.
76, Issue 2
[7] Three-bladed prop conversion for the Piper
Comanche 260C, Aircraft Engineering and
Aerospace Technology, 2003, Vol. 75, Issue 5
[8] Vasic,Z., Blai,., Stefanovi,V.: Reconstruction of
aircraft structure with the aim of optimizing and
extending aircraft life-cycle, OTEH 2012, Belgrade
2012., ISBN 978-86-81123-58-4.

Picture 17. CAD model of Sova aircraft


Sova (Utva 75 A41M), as a conversion of the Utva 75 with modifications of the cockpit area, controls and propulsion, will result in a new CS 23 certification (Certification Specifications for Normal, Utility, Aerobatic and Commuter Category Aeroplanes).


SOME ASPECTS OF THE DIFFERENT TYPES WIRELESS SENSORS IMPLEMENTATION WITHIN AIRBORNE FLIGHT TEST CONFIGURATION
ZORAN FILIPOVI
Lola Institute, Belgrade,zoran.filipovic@li.rs
VLADIMIR KVRGI
Lola Institute, Belgrade,vladimir.kvrgi@li.rs
DRAGOLJUB VUJI
Military Technical Institute, Belgrade, dragoljub.vujic@vti.vs.rs

Abstract: The design and implementation of the Flight Test Instrumentation (FTI) System is very important for the successful evaluation of a testing prototype of an aircraft. The configuration of the FTI System is based on the General Plan of Testing of any new aircraft (or a significant improvement to an existing aircraft). The system represents a fully integrated approach to flight test systems which addresses the end-to-end requirements, from airborne data acquisition and real-time flight monitoring through aircraft performance and stability/control analysis. The implementation of wireless sensor network (WSN) communication within an airborne FTI configuration or an aircraft structural health monitoring system introduces a number of challenges, such as guaranteeing reliable transfer of the sensor data and time synchronization of the remote nodes. Special attention has been given to the use of Micro-Electro-Mechanical Systems (MEMS) and Surface Acoustic Wave (SAW) sensor technology. This paper addresses some aspects of WSN acquisition and the associated challenges, and discusses approaches and solutions to these problems.
Keywords: Flight Test Instrumentation, wireless sensor networks, surface acoustic wave sensors, micro-electro-mechanical systems, sensor node.

1. INTRODUCTION

The main purpose of the Flight Test Instrumentation (FTI) System is to acquire data about the operation of the test vehicle and provide the data for processing during post-flight analyses. For the development, verification and certification phases, the aircraft must be equipped with a complete FTI System consisting of three components:
- Airborne Data Acquisition System (DAS),
- Compatible Ground Telemetry Station (GTS),
- Post Test Data Processing System (DPS).

- High environmental acceptability of FTI equipment (temperature, vibration, humidity, etc.),
- EMC compatibility [2].

In the early days the airborne DAS consisted of pilots' notes, photographs of cabin instrumentation, and basic sensor data. Today's systems consist of thousands of data acquisition modules, recording subsystems, and the associated cabling that is required to support these systems. The need for additional sensors continues to increase as aircraft systems become more sophisticated and technologies evolve. As a result, FTI systems must be designed to reduce overall test programme time. Moreover, size, weight, and power (SWaP) are major concerns for aircraft manufacturers as demands for more fuel-efficient aircraft rise. A major contributor to aircraft development (and assembly) timeframe, cost, and overall weight is cabling. Minimizing the amount of cabling in an aircraft may considerably reduce its development time, cost, and weight. Therefore, a wireless sensor network (WSN) based airborne FTI system must be small enough to be relatively easily installed in a variety of configurations and locations on aircraft, stores, or other test articles without significantly impacting the performance of the system being tested [3], [4].

The design and implementation of the FTI System is very important for the successful evaluation of a testing prototype of an aircraft. The configuration of the FTI System is based on the General Plan of Testing of any new aircraft (or a significant improvement to an existing aircraft) or missile [1], [2]. Numerous operational aspects must be considered as part of the FTI system design, such as:
- Small size and open architecture, reduced wiring,
- Easy to install (modularity - adaptable, plug and play),
- Low power consumption,
- Low maintenance, easy to handle (self-testing),
- Very high MTBF (Mean Time Between Failures),
- Optimized data acquisition and data recording,
- Long-lasting calibration or no calibration necessary,
- Very low total installation costs,

This paper describes some approaches to addressing these challenges and achieving a useful FTI system using different types of WSNs. Many challenges need to be overcome before WSNs can be effectively and successfully deployed in FTI, including throughput, latency, power management, electromagnetic interference (EMI), and band utilization considerations.


applications. The first is to have a wireless link from a Data Acquisition Unit (DAU) to the sensors.

2. AIRBORNE WSNs FTI CONFIGURATION


The airborne part of the FTI system supports the measurement of different parameters and their transfer to the GTS using a specific airborne infrastructure consisting of cables, connectors and signal-conditioning electronics. The introduction of a new hardware element into a non-WSN based test system requires any number of these infrastructure pieces, thereby almost guaranteeing a negative effect on the SWaP constraints while also increasing the material cost of the system. Furthermore, each new physical connection introduces a new potential point of failure and a maintenance item that requires more time when checking/auditing the system [1].

Picture 1. Wireless FTI Networks


For the second case, one or more remote DAUs can be connected via wires to sensors, but connect to the rest of an airborne DAS by a wireless link. This has, to an extent, been a strategy used in rotorcraft, where a DAU resides on a rotor hub and is connected using slip plates to the rest of the system. This still requires a physical link, however, and a wireless RF link would remove the need for such an often problematic electro-mechanical solution.

Considering these effects, the integration of WSN-based approaches into instrumentation test systems offers key benefits. First, WSN-based replacement elements are inherently designed to require fewer physical connections. That results in positive effects on the SWaP constraints and the physical robustness of the test system. Furthermore, the lack of physical connections requires WSNs to be adaptive and flexible. As a result, introducing new elements into a WSN-based instrumentation system will require less installation time, maintenance, and documentation.

Ethernet technology offers numerous benefits for networked FTI systems (integrated Network Enhanced Telemetry - iNET), such as increased data rates, flexibility, scalability and, most importantly, interoperability owing to the inherent interface, protocol and technological standardization [1].

Also, WSNs provide new opportunities to extend the sensing and monitoring capabilities throughout the lifecycle of a system. The remote accessibility of WSNs provides a means to calibrate and configure many sensors and devices without direct physical access. These WSNs communicate their data to a central unit installed within the aircraft (Picture 1). Picture 1 shows a diagram of an Ethernet-based FTI data acquisition system including WSNs and a number of data acquisition units (DAUs). There are two principal wireless use cases for FTI

The focus of this paper is on some aspects of the integration of WSNs within the FTI network. Picture 2 shows a system diagram of a small onboard WSN instrumentation and telemetry configuration.

Picture 2. Onboard WSNs instrumentation and telemetry configuration


The system is comprised of a group of spatially distributed autonomous wireless sensor nodes that monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion and so on. These nodes transmit the acquired data to a wireless controller module (WCM) located in the main airborne DAU together with the other necessary modules.


- OLM - Operational Loads Monitoring,
- RHUMS - Rotorcraft Health and Usage Monitoring,
- SHM - Structural Health Monitoring.

Implementing such wireless communication introduces a number of challenges, such as guaranteeing reliable transfer of the sensor data and time synchronization of the remote nodes. The key design challenges for the airborne instrumentation/telemetry are: remote sensor and battery selection, and a frequency study including radio and antenna selection [3], [4], [5].

The WCM provides a self-discovery mechanism for identifying the sensors in its network, facilitating the sensor calibration process, programming acquisition variables in the sensor units, and providing two-way communication with all the sensors in its network. The installation of the WCM as part of the data system may require an external remote antenna for communication with the wireless sensors. The main function of the WCM is to receive wireless data from the sensors and to format the data in the format required by the host DAU. The WCM implements command and control with the sensor units, uniquely addresses each sensor in its network, controls multiple sensors simultaneously, and provides access to the health status of each sensor unit in its network. On power-up, or as required, the controller programs the sensitivity and sample rate of the sensor channels. Periodically the sensor will append additional data to the sensor data stream. This data includes the state of the discrete sensor inputs, the temperature within the sensor module enclosure, as well as other required housekeeping signals. As required, the controller can send commands to each sensor under its control, instructing it to change the logic state of its discrete output lines [5].

3. WIRELESS COMMUNICATION STANDARDS FOR WSNs
There are numerous standards that are used in the implementation of wireless sensors for aerospace applications, such as:

All information about the measured electrical and non-electrical quantities during the flight testing of the aircraft is transmitted in real time via the aircraft telemetry transmitter (XTMR S-band transmitters operating from 2.2 to 2.45 GHz) and the corresponding antennas (2 or 3) that are positioned in special places on the external structure of the aircraft.

- IEEE 802.11 is a set of standards that operate primarily in the 2.4 GHz and 5 GHz bands. This standard is commonly known as Wi-Fi and is widely used in the office and at home. There are a number of protocols defined using this standard, currently the most widely used being 802.11g and 802.11n.

- Bluetooth (IEEE 802.15.1) operates in the range from 2.4 to 2.483 GHz with 79 RF channels of 1 MHz width (generally, 2.408 - 2.480 GHz is the operating range), transmitting up to 1 Mbps (Bluetooth 2.0) or up to 2 to 3 Mbps for Bluetooth 2.0 + EDR (Enhanced Data Rate).

- Zigbee (IEEE 802.15.4) is a networking standard generally used for low data rate, home automation and industrial control applications such as lighting control. It is based on the IEEE 802.15.4 radio standard and supports applications that require periodic short data transfers of up to 250 kbit/s over distances of up to 75 m.

These technologies were evaluated in terms of standardization, maturity, bandwidth, range, security, and network join time.
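Because throughput and data budget are listed later as key challenges, the Python sketch below gives a minimal feasibility check of how many measurement channels a Zigbee-class 250 kbit/s link could carry. The channel counts, sample rates, word length, protocol overhead factor and usable-throughput fraction are all assumptions chosen only for illustration, not measured figures for any particular radio.

# Rough data-budget check for a 250 kbit/s Zigbee-class link; all of the
# parameters below are illustrative assumptions, not measured figures.
def required_kbps(channels, sample_rate_hz, bits_per_sample, overhead=1.3):
    """Application data rate in kbit/s including a rough protocol overhead."""
    return channels * sample_rate_hz * bits_per_sample * overhead / 1000.0

AIR_RATE_KBPS = 250.0
USABLE_FRACTION = 0.4   # assumed share left after MAC/stack overhead and retries

for channels, fs in ((4, 100), (8, 200), (16, 500)):
    need = required_kbps(channels, fs, bits_per_sample=16)
    verdict = "fits" if need <= AIR_RATE_KBPS * USABLE_FRACTION else "exceeds"
    print(f"{channels} ch @ {fs} Hz -> {need:.1f} kbit/s ({verdict} the assumed budget)")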

As the available space in aircraft weapon bays, small weapons, and unmanned vehicles becomes extremely limited, the miniaturization of remote sensors and telemetry units becomes critical. The basic requirements that the airborne part of the FTI must satisfy are:
- A flexible and modular instrumentation and telemetry system that can be installed on an aircraft or other test article without the need for permanent modifications,
- The ability to remotely activate/deactivate and reconfigure remote sensor systems (e.g. changing resolution and/or sample rate) and to interface with a larger airborne FTI network,
- The FTI equipment must be easily removed after test completion with minimal impact on the operational configuration [1], [2].

In terms of power consumption and device size, Zigbee and the latest version of Bluetooth (2.0) are comparable. The power requirement of Bluetooth 2.0 devices is half that of previous versions. Zigbee devices are specified to operate over distances of up to 100 meters, as are Bluetooth class 1 devices. Bluetooth class 2 devices are specified for distances up to 10 meters. Both wireless communication systems have their own security protocols. Zigbee security is defined by 128-bit AES plus application-layer security; Bluetooth security uses 64- and 128-bit encryption. The different security systems must be evaluated in the aircraft environment. One advantage of Zigbee devices is their connection time. Zigbee can join existing networks in less than 30 ms, whereas Bluetooth 2.0 devices require approximately 3-5 seconds. Zigbee has the unique ability to receive information from a device outside the network, even if this receiver is not the principal controller, and then forward the information (without reading the data it is transmitting). The device simply looks at the destination and sends the data on to

WSNs are used not only during the process of testing aircraft prototypes under the complex FTI system, but also in cases of equipping aircraft during their exploitation with additional acquisition systems in order to extend their lifetime. For this purpose, there are several Usage Monitoring programs, such as:

the controller. This capability may be useful in situations where a direct line of sight between an outlying sensor and the controller is not possible. It is one of the reasons why Zigbee has been chosen as more promising for further investigation in FTI WSNs.


- Wireless sensor unit with an integrated transducer and battery,
- Wireless sensor unit with an external transducer, using an external power supply (28 V).

In both cases the sensor digitizes and encodes the measurement data for wireless transmission to a WCM and the other necessary modules within the FTI. The sensor node is fully programmable via the wireless connection from the WCM. The sensor programmable variables include channel sample rate, gain, offset, and anti-alias filter cutoff frequency characteristics. All calibration and signal conditioning will be preset within the unit. It includes an on-board non-volatile memory for the storage of critical information such as the module identifier, channel setup, transducer calibration data, etc. (in accordance with IEEE 1451.2, "Standard for a Smart Transducer Interface for Sensors and Actuators - Transducer to Microprocessor Communication Protocols and Transducer Electronic Data Sheet (TEDS) Formats").
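The kind of per-node record described above (module identifier, channel setup, calibration data) can be sketched as a simple data structure, as in the Python example below. The field names are illustrative assumptions only and do not reproduce the binary TEDS layout defined in IEEE 1451.2.

from dataclasses import dataclass, field

# Illustrative per-node configuration record; field names are assumptions
# and do not reproduce the IEEE 1451.2 TEDS binary format.
@dataclass
class NodeConfig:
    module_id: str
    sample_rate_hz: int
    gain: float
    offset: float
    antialias_cutoff_hz: float
    calibration: dict = field(default_factory=dict)

node = NodeConfig(module_id="WSN-07", sample_rate_hz=2000, gain=4.0, offset=0.0,
                  antialias_cutoff_hz=800.0,
                  calibration={"sensitivity_mV_per_g": 10.2})
print(node)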

4. WIRELESS SENSORS TECHNOLOGY


Wireless sensors are generally divided into 4 categories:
- Active sensors are powered by a battery. They cannot be chosen for measurements in harsh environments, such as under high humidity and temperature, due to the sensitivity of the battery and electronic components.
- Semi-active sensors, also called radio-frequency identification (RFID) sensors, are powered by strong ambient energy such as inductive coupling. They can only operate at low temperatures.
- Optical sensors can be used to measure temperatures up to 2000 °C, but they present problems of sensitivity and dimensions.
- Passive sensors such as acoustic transponders or wireless electromagnetic resonators do not require a source of power. The signal backscattering is performed in a frequency- or time-domain RADAR interrogation. They present significant advantages compared to previous sensing methods and can be used up to 1300 °C [6], [7].
Industrial, military, automotive and aerospace applications require accurate, small-size, maintenance-free and low-cost sensing systems. Micro-Electro-Mechanical Systems (MEMS) technology is the integration of mechanical elements, sensors, actuators, and electronics on a common silicon substrate through microfabrication technology. MEMS brings together silicon-based microelectronics and micromachining technology, making possible a smaller, more highly integrated and lower-power transducer than would otherwise have been possible. Sensor network nodes are devices that incorporate communications, processing, sensors and power sources within a small package. The use of MEMS technology enables the production of low-cost, low-power multifunctional sensors of very small size and light weight. MEMS is a technology whereby microactuators, microelectronics and other technologies can be integrated onto a single microchip. The critical physical dimensions of MEMS devices can vary from well below one micron at the lower end of the dimensional spectrum to several millimetres.

Picture 3. Triaxial accelerometer with built-in transducer and battery
The sensor unit (Picture 3) includes a digital signal controller capable of the following actions:
- Generating a data sample rate clock,
- Initiating data sampling,
- Controlling the A/D converter and axis channel
multiplexer,
- Collecting axis channel data from the A/D converter,
- Implementing an IIR filter algorithm to provide the
required data output sample rate and frequency
response,
- Interfacing with the on-board wireless radio [5].
The sensor unit includes provisions to collect auxiliary
data such as sensor temperature and/or the state of two
digital input and two digital output lines. The state of the
discrete input lines is transmitted back to the controller
unit. The discrete output signals represent the state of
discrete input signals at the wireless controller unit.

The wireless sensor node incorporates transducers, signal conditioning, an acquisition controller, a processor, power (battery), a wireless radio, and an antenna into a sealed, aerodynamically shaped, miniaturized package. It is intended for external installation on a test vehicle via the use of an electro-cleavable adhesive [4].

The unit also includes an on-board antenna tuned to the Bluetooth band and an I/O connector accessible through an environmentally sealed cover, which provides access to the discrete I/O signals and to a switch that is used to disconnect the battery, in order to save it when not in use, or for battery recharging.

4.1. Active wireless sensor with integrated transducer and battery

A MEMS-based capacitive accelerometer transducer is typically a structure that uses two capacitors formed by a moveable plate held between two fixed plates. Under zero net force the two capacitances are equal, but a change in
For WSN aircraft applications two types of wireless sensor configurations can be used:


force will cause the moveable plate to shift closer to one of the fixed plates, increasing that capacitance, and further away from the other fixed plate, reducing the other capacitance. This difference in capacitance is detected and amplified to produce a voltage proportional to the acceleration. The dimensions of the structure are of the order of microns.
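A minimal numerical Python sketch of this differential-capacitance principle is given below. The plate area, gap, proof mass and suspension stiffness are invented values used only for illustration and do not describe any particular MEMS device.

# Differential capacitance of a moveable plate between two fixed plates.
# All geometry and mechanical values are illustrative assumptions.
EPS0 = 8.854e-12    # vacuum permittivity, F/m
AREA = 1.0e-6       # plate area, m^2 (assumed)
GAP = 2.0e-6        # nominal plate-to-plate gap, m (assumed)
MASS = 1.0e-9       # proof mass, kg (assumed)
K_SPRING = 1.0      # suspension stiffness, N/m (assumed)

def delta_capacitance(accel_m_s2):
    """C1 - C2 for a given acceleration (small-displacement regime)."""
    x = MASS * accel_m_s2 / K_SPRING      # proof-mass displacement, m
    c1 = EPS0 * AREA / (GAP - x)          # gap narrows on one side
    c2 = EPS0 * AREA / (GAP + x)          # and widens on the other
    return c1 - c2

for g in (1.0, 10.0, 50.0):
    print(f"{g:5.1f} g -> delta C = {delta_capacitance(9.81 * g) * 1e15:.1f} fF")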


vapor, moisture, and temperature sensors. As a result of their mass sensitivity, they can be used in numerous physical, chemical, and biological applications [6], [7].
A basic SAW device consists of two Inter-Digital Transducers (IDTs) on a piezoelectric substrate such as quartz. The input IDT launches and the output IDT receives the waves. The IDT consists of a series of interleaved electrodes made of a metal film deposited on a piezoelectric substrate. The width of the electrodes usually equals the width of the inter-electrode gaps, giving the maximal conversion of electrical to mechanical signal, and vice versa. The minimal electrode width obtained in industry is around 0.3 μm, which determines the highest frequency of around 3 GHz. If the electrodes are uniformly spaced, the phase characteristic is a linear function of frequency and the phase delay is constant in the appropriate frequency range. This type of SAW device is then called a delay line and can be used as a delay line or as a band-pass filter. The load impedance ZL causes variations of the SAW reflection in magnitude and phase.
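A short worked example of the relation implied by those figures is given below: for a standard IDT the acoustic wavelength is roughly four times the electrode width, so f0 = v / (4w). The SAW velocity of about 3500 m/s used in this Python sketch is a typical value for common piezoelectric substrates, assumed here only for illustration.

# f0 = v / (4 * electrode width) for a standard IDT; 3500 m/s is an assumed,
# typical Rayleigh-wave velocity for common piezoelectric substrates.
def saw_centre_frequency_ghz(electrode_width_um, velocity_m_s=3500.0):
    wavelength_m = 4.0 * electrode_width_um * 1e-6
    return velocity_m_s / wavelength_m / 1e9

print(f"w = 0.3 um -> f0 ~ {saw_centre_frequency_ghz(0.3):.1f} GHz")   # ~2.9 GHz
print(f"w = 2.0 um -> f0 ~ {saw_centre_frequency_ghz(2.0):.2f} GHz")   # ~0.44 GHz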

For some applications, the wireless sensor unit may require an external transducer and an external antenna installation. In this application, the only element installed on the skin of the test vehicle is the antenna. This application requires the use of available power near the sensor unit location, which eliminates the need for a battery. This is very useful in the case where instrumentation is installed within a pod and data is transmitted back to the fuselage of the aircraft (Picture 4).

Picture 4. Wireless sensor unit with external transducer


Picture 5. SAW wireless delay line

This unit will be installed internally (missile launcher), has an external triaxial sensor requiring excitation power from the unit, uses 28 VDC power, and uses an external antenna. All the other building blocks used in the unit with the internal transducer are used in this unit. The transducer for use with this unit can be a triaxial piezoelectric accelerometer. The WCM is installed as part of the host DAU of the airborne FTI and plays the same role as in the WSN with a built-in transducer and battery.

Sensing with acoustic waves is based on measuring variations of the acoustic propagation velocity of the wave, or of the wave attenuation. These variations imply changes in wave properties (frequency for resonators, delay for delay lines) which can be translated into the corresponding change of the physical parameter measured.
In the second type of SAW device, SAW resonators, the IDTs are only used as converters of electrical to mechanical signals, and vice versa, and the amplitude and phase characteristics are obtained in different ways. In resonators, the reflections of the wave from either metal stripes or grooves of small depth are used (Picture 6).

4.2. Passive surface acoustic wave wireless sensors
Surface acoustic wave (SAW) sensors are a class of MEMS devices which rely on the modulation of surface acoustic waves to sense a physical phenomenon. SAW sensors are used for the identification and measurement of physical quantities such as temperature, pressure, torque, acceleration, tire-road friction, humidity, etc. They do not need an additional power supply for the sensor elements and may be accessed wirelessly, enabling their use in harsh indoor/outdoor environments (Picture 5). They have a rugged compact structure, outstanding stability, high sensitivity, low cost, fast real-time response and extremely small size (lightweight). SAW sensors are used for the identification of moving objects and parts (so-called ID tags) and for the wireless measurement of different parameters. SAW wireless sensors are beneficial when monitoring parameters on moving objects, such as tire pressure on cars or torque on shafts. Sensors that require no operating power are highly desirable in remote locations, making these sensors ideal for remote chemical

Picture 6. SAW wireless resonator sensor
Typical SAW wireless sensing systems include:
- a packaged SAW sensor connected to an antenna,
- a separate transceiver connected to one or multiple antennas, or a wireless controller module (WCM) as a part of the airborne FTI system.
The radio-frequency transceiver (interrogator unit) sends an electromagnetic pulse. The association of an antenna

with an IDT allows it to be wirelessly interrogated. The pulse is converted into a surface acoustic wave (SAW) on the sensor (piezoelectric effect). The properties of the acoustic wave are modified under the effect of the physical parameter which is sensed (e.g. temperature). The SAW sensor response transmits these modifications back to the RF transceiver. The measurement signals are contained in the SAW transponder's high-frequency response signal, which the reader records and evaluates.


prototype of an aircraft. Major challenges include the miniaturization of the hardware, particularly the remote sensors, power, and the selection of the appropriate communication protocol and operating frequency. The challenges in the development of a wireless sensor system for airborne applications are numerous and require a systematic approach and careful study in many areas of discipline. These areas include:
- Transducer selection,
- Sensor power source related issues,
- Wireless radio and antenna (interference with/from other aircraft electronic systems),
- Line-of-sight requirements between sensor and controller,
- Characterizing data latency in the wireless link,
- Frequency study (the effect of S-band telemetry transmitters upon the wireless link),
- Network throughput and data budget,
- Environmental qualification of hardware (MIL-STD-810 environmental tests, including the electro-cleavable adhesive for externally installed wireless sensor modules),
- Ground tests for EMC/EMI qualification of hardware and software (MIL-STD-461/462) of each airborne FTI component and of the aircraft with integrated FTI equipment in the Anechoic Shielded Chamber and at the Open Field Site,
- WSN data encryption (security),
- Measurement uncertainty [1], [3], [5].

First, the interrogator generates an RF signal at the carrier frequency f0, modulated by a short rectangular RF pulse during a time interval of T = 20 μs. If the interrogator frequency f0 is equal to the resonance frequency of the resonator, a maximum of energy is stored in the SAW sensor. When the interrogator stops the transmission after the pulse of T = 20 μs, it switches to the receive mode and is then able to measure the received energy from the discharge of the resonator into the antenna.
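The interrogation principle just described can be sketched numerically in Python: the energy returned by the resonator falls off as a Lorentzian around its resonance, and the ring-down time constant is tau = Q / (pi * f_r). The resonance frequency and quality factor below are assumed, illustrative values and are not taken from any measured device.

import math

# Illustrative SAW-resonator interrogation model; F_RES and Q are assumptions.
F_RES = 433.9e6   # resonance frequency, Hz (typical ISM-band value, assumed)
Q = 10_000        # loaded quality factor (assumed)

def relative_stored_energy(f_interrogator):
    """Lorentzian response relative to interrogation exactly on resonance."""
    detune = 2.0 * Q * (f_interrogator - F_RES) / F_RES
    return 1.0 / (1.0 + detune ** 2)

tau_us = Q / (math.pi * F_RES) * 1e6
print(f"ring-down time constant ~ {tau_us:.1f} us")
for offset_khz in (0.0, 10.0, 50.0):
    e = relative_stored_energy(F_RES + offset_khz * 1e3)
    print(f"offset {offset_khz:5.1f} kHz -> relative received energy {e:.3f}")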
SAW wireless sensors have unique features and unmatched benefits, such as:
- they operate in severe environments: strong electromagnetic fields, high acceleration, high temperature, explosive, corrosive and radiated environments,
- they enable measurements on moving and rotating parts,
- they need no battery,
- no cable or wire disposal, and effectively unlimited autonomy,
- they are small and light (e.g. 5x5 mm, 2 g),
- a large measurement range (low to high temperature and strain),
- multi-functional measurement (sensor for pressure and temperature),
- high frequency (from 50 MHz up to 2.45 GHz),
- a read-out distance of several meters (up to 15 m),
- an advanced digital transceiver easily connectible to customer systems through a standard interface [7], [8].

A series of ground tests (in the lab and on the flight line) and flight tests after the integration of the airborne FTI subsystem must be performed in order to evaluate the whole FTI system before the flight test process of the concrete flying object starts.

References
[1] Filipovic,Z., Stojic,R., Stojic,T., Vujic,D.: Design and implementation of modern flight test instrumentation system for civilian and military application, 4th International Scientific Conference on Defensive Technologies, ISBN 978-86-81123-40-9, Belgrade, 6-7 October, pp. 440-446, 2011.
[2] Filipovi,Z., Darag,S.A., Mohammed,D., Vujic,D.: The sources of measurement uncertainty related to aircraft flight testing, 6th International Scientific Conference on Defensive Technologies, ISBN 978-86-81123-71-3, Belgrade, 9-10 October, pp. 107-113, 2014.
[3] Diarmuid,C.: Wireless Data Acquisition in Flight
Test Networks, ITC 2015,
http://hdl.handle.net/10150/596417
[4] Vujic,D., Stojic,R., Filipovic,Z.: Wireless sensor
networks technology in aircraft structural health
monitoring, 5th International scientific conference on
defensive technologies, ISBN 978-86-81123-58-4,
Belgrade, 1819, September, pp. 141-147, 2012.
[5] Pellarin,S., Musteric,S.: Wireless sensor system for
airborne applications, International Telemetering
Conference Proceedings, 2007,
http://hdl.handle.net/10150/604468.
[6] Hribsek,M., Ristic,S., Radojkovic,B., Filipovic,Z.: Modelling of chemical surface acoustic wave (SAW) sensors and comparative analysis of new sensing materials, International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, Int. J. Numer. Model. 26 (2013) 263-274, (wileyonlinelibrary.com), DOI: 10.1002/jnm.1870.
[7] Stojic,T., Hribsek,M., Filipovic,Z.: Surface Acoustic Wave Sensors in Mechanical Engineering, FME Transactions, Vol. 38, No 1, (2010) 11-18.
[8] Hribsek,M., Ristic,S., Radojkovic,B., Filipovic,Z.: Modelovanje i projektovanje filtara sa povrsinskim talasima i njihova primena u vojne svrhe (Modelling and design of surface acoustic wave filters and their application for military purposes), VTG, Vol. 39, No 2, 2011.

Typical aeronautical applications of SAW WSNs depend on the test programme or the usage monitoring programme of a given aircraft (prototype), such as:
- structural health monitoring of ailerons, slats, fuselage and turbine blades,
- tire pressure monitoring of landing gear wheels,
- fuel/engine oil/hydraulics level monitoring,
- fire/overheat detection,
- ice detection,
- temperature monitoring on brakes, inside the engine and on the rotor,
- pressure monitoring inside the engine, turbine and compressor,
- critical equipment surveillance (on/off alarm) of the cockpit door and cabin doors,
- emission monitoring,
- overload detection.

5. CONCLUSION
This paper describes some aspects of the implementation of different types of wireless sensors within an airborne FTI subsystem. Design and implementation of the FTI system is very important for successful evaluation of the testing

UAS - FROM MINI TO TACTICAL


ADI COHEN
Aeronautics, NAHAL SNIR 10 ST. Yavne, Israel, +972-54-5500972, adic@aeronautics-sys.com

Abstract: Mini UAVs (unmanned aerial vehicles) are growing into scalable surveillance and reconnaissance platforms in terms of operational range and payload capabilities. The Aeronautics Orbiter, a compact and lightweight mini UAV, is presented in this paper.
Keywords: lightweight mini UAV, Orbiter UAV system, main features.

1. INTRODUCTION

Aeronautics Ltd is a world-leading manufacturer of unmanned Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR) solutions. Aeronautics' comprehensive systems provide real-time decision-making capabilities for missions such as border protection, maritime surveillance and more.
Deployed in dozens of countries on five continents, the world-famous Aeronautics UAS enable military, homeland security and law enforcement customers to make their countries safer.
Since the company's establishment in 1997, the Aeronautics UAS have been the company's flagship products, and have gained over 200.000 flight hours of operational experience. Over the years, Aeronautics has diversified its product portfolio to other ISTAR solutions, defense electronics and integrated solutions.
Aeronautics' in-house vertical integration capabilities facilitate rapid delivery of unique and tailored solutions to customers, and provide excellent growth potential for platforms and systems.
Fields of activity are: unmanned aerial systems, aerial special mission systems, ground & naval systems, C4ISTAR solutions, HLS and civil applications.

2. COMPOSITION OF THE ORBITER

The Aeronautics Orbiter (henceforth the Orbiter) is a compact and lightweight mini UAV that provides a complete Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR) capability for field-deployed units. Designed for use by infantry, artillery, marine and special forces, and law enforcement organizations, the Orbiter is a comprehensive and flexible solution, ideally suited for covert military operations, as well as for homeland security missions, peace support and commercial operations.
The Orbiter MUAV system offers these capabilities:
- High endurance
- Long operational range
- High-end payloads for day and night vision
- Exceptional navigability and safety in harsh environmental conditions, specifically in strong winds
- Growth potential allowing heavy payloads and new extensions
- Very low signatures: acoustic, thermal, visual and radar
The system and the proprietary control software are compliant with NATO STANAGs such as 4586 and 4609.

3. GRAPHIC OF THE ORBITER FAMILY

3.1. Orbiter 2 Mini UAS

The mature and combat-proven Orbiter Mini UAS is part of the Aeronautics Orbiter UAS family. This compact and lightweight system is designed for ease of use by warfighters and security agents, providing efficient operational solutions for tactical military and security applications. The Orbiter 2 platform has proven top performance and high reliability, providing lifesaving support in conflict zones worldwide. In the maritime configuration, the Orbiter MUAS provides a maritime surveillance, reconnaissance and target acquisition solution for small naval vessels operating maritime security and naval warfare missions.

Figure 1. Orbiter 2 EO/IR


Main Features
- Dual payloads for day/night operation
- Camera-guided flight
- Rapid deployment, 7 minutes to launch
- Simple assembly, rapid turnaround
- Electrical propulsion
- Low silhouette, silent, covert operation
- Automatic takeoff and recovery
- Operational below cloud base, in harsh weather conditions
- Long endurance, extended operational range
- Pneumatic catapult launch, airbag & parachute recovery
- Mission autonomy, accurate navigation, with or without GPS or datalink
- Control and monitoring from vehicles
- Encrypted, digital datalink, frequency hopping

Payloads
- Stabilized dual EO/IR
- Continuous zoom
- EO camera with laser designation
- Photogrammetric mapping

Maritime
- Ready-to-use kit used on any type of vessel
- No need for a flight deck
- Net landing for maritime operation

Orbiter 2 specifications
- Wingspan: 3.00 m
- MTOW: 10.3 kg
- Payload weight: 1.50 kg
- Maximal speed: 70 kt
- Data link: LOS, up to 100 km

Figure 2. Orbiter 2 COMM

3.2. Orbiter 3 Small Tactical UAS

The mature and combat-proven Orbiter 3 Small Tactical Unmanned Aerial System (STUAS) is part of the Aeronautics Orbiter UAS family. Orbiter 3 is designed to deliver top performance with the lightest and most advanced covert platform available today. The Orbiter 3 platform has proven robustness and high reliability, providing lifesaving support in conflict zones worldwide.
Operational for up to 7 hours, over 100 km from its control station, carrying advanced multi-sensor payloads, Orbiter 3 is designed for ease of use in tactical operations.
The system and the proprietary control interface are compliant with NATO STANAGs such as 4586 and 4609.

Figure 3. Orbiter 3 EO/IR cooled

Main Features
- Highly transportable vehicle-mounted system
- Tri-sensor payloads for day and night operation under clouds
- Unique SIGINT capabilities
- Rapid deployment, 7 minutes to launch
- Simple assembly, rapid turnaround
- Silent, electrical propulsion
- Low silhouette, covert operation
- Automatic takeoff and recovery
- Operational below cloud base, in harsh weather conditions
- Long endurance, extended operational range
- Catapult launch, net landing for maritime operation
- Mission autonomy, accurate navigation, with or without GPS or datalink
- Control and monitoring from moving vehicles

Advanced Image Processing Capabilities
- Automatic video tracker
- Video Motion Detector (VMD)
- Video mosaic composition
- D-Roll and image stabilization
- Digital zoom and super resolution
- H.264 video streaming

Orbiter 3 specifications
- Wingspan: 4.40 m
- MTOW: 30 kg
- Payload weight: 5.50 kg
- Maximal speed: 70 kt
- Data link: LOS, up to 150 km



Payloads
- Stabilized triple sensor: day, night (cooled FLIR), with laser designation
- Continuous zoom
- SIGINT capabilities
- Photogrammetric mapping

Figure 5. Orbiter 1K

3.3. Orbiter 4 Small Tactical UAS

Aeronautics introduces the Orbiter 4 Small Tactical Unmanned Aerial System (STUAS), part of the Orbiter UAS family.
With long endurance capabilities, Orbiter 4 is designed to deliver top performance with the lightest, fieldable, multi-mission and most advanced covert platform available today, for both land and maritime operations.
Carrying advanced multi-sensor payloads, Orbiter 4 is designed for ease of use to fit all operational needs.

Figure 4. Orbiter 4 EO/IR cooled

Orbiter 4 specifications
- Wingspan: 5.00 m
- MTOW: 50 kg
- Payload weight: 12 kg
- Endurance: up to 24 hrs
- 2 payloads simultaneously
- Data link: LOS, up to 250 km
- Service ceiling: 18,000 ft

Payloads
- Stabilized triple sensor: day, night (cooled FLIR), with laser designation
- Continuous zoom
- SIGINT capabilities
- Photogrammetric mapping
- Synthetic Aperture Radar (SAR)
- Automatic identification system

Figure 6. Orbiter family growth potential

References
[1] www.aeronautics-sys.com
[2] www.commtact.co.il
[3] Orbiter 2 Mini UAS Brochure
[4] Orbiter 3 Small Tactical UAS Brochure
[5] Orbiter 4 Small Tactical UAS Brochure

SECTION III

Weapon Systems and Combat Vehicles

CHAIRMAN
Professor Momilo Milinovi, PhD
Professor Dejan Mickovi, PhD

A PRELIMINARY DESIGN MODEL FOR EXPLOSIVELY FORMED


PROJECTILES
MOHAMMED AMINE BOULAHLIB
Military Academy, University of Defence, Belgrade, email: m.a.boulahlib@gmail.com
MILO MARKOVI
University of Belgrade, Faculty of Mechanical Engineering Weapon Systems Department, mdmarkovic@mas.bg.ac.rs
SLOBODAN JARAMAZ
University of Belgrade, Faculty of Mechanical Engineering Weapon Systems Department, sjaramaz@mas.bg.ac.rs.
MOMILO MILINOVI
University of Belgrade, Faculty of Mechanical Engineering Weapon Systems Department, mmilinovic@mas.bg.ac.rs.
MOURAD BENDJABALLAH
Military Academy, University of Defence, Belgrade, mourad.bendjaballa@hotmail.com

Abstract: The current paper proposes analytical approaches implemented in a performance-calculation program for Explosively Formed Projectiles (EFP). The proposed analytical methods mathematically describe the EFP forming process, aiming to optimize the initial phase of the EFP warhead design. A mathematical model, based on well-known theoretical approaches, is established and implemented in software. The developed software provides faster analysis of the EFP design process and the possibility to test new EFP configurations, in addition to the performances of already existing ones. The adopted model is tested and validated for several types of EFP warheads according to available experimental reports. The program's output results, such as initial velocity, kinetic energy, and axial and radial deformation energies of liners, are compared with experimental data.
Keywords: explosively formed projectiles, analytical method, physics of explosion, performances software.
number of influencing parameters for this type of warhead, such as the explosive charge, liner form, materials and cases, are adopted in the analytical method and implemented in an algorithm for comparative analyses. In addition, the algorithm offers the possibility to directly export the adopted geometry of the EFP warheads from the Matlab software package into the Autodyn numerical software, which considerably decreases preparation time.

1. INTRODUCTION
In the field of modeling and warhead design based on EFP principles, only a few papers containing analytical approaches that define the projectile forming process have been written [1-4]. Recently, most papers have been based on numerical approaches [5-10], which determine the performances and give detailed models of the forming process. However, numerical software, particularly Autodyn, which is often used for detailed analyses in numerical approaches, requires comprehensive preparation of the expected warhead design, in addition to a long-lasting calculation process.

This paper aims to present an improved analytical method for estimating EFP performances, based on the integration of two analytical models presented in papers [1, 2]. The first model [1] is based on the active explosive charge masses, which correspond to the charges in the grid elements of the explosively propelled liner. The purpose of this model is to integrate the explosive process into the initial EFP velocity calculation. The second model [2] is based on Gurney's method [11] for the final EFP velocity estimations. This method particularly uses an axial and radial direction approach for the active explosive charge masses of each element in the liner's grid. The resulting estimations of the radial and axial plastic energy distributions, as stated in paper [2], present invariant expressions to be used for the estimation of the EFP form in the initial phase. These expressions for the radially and axially distributed energies from the second method are implemented in the first one, in order to provide two main performances of the EFP: one is

Nowadays, EFP warheads are present in many systems that require appropriate modernization and/or optimization, such as artillery submunitions, antitank missiles, mines etc. The current paper presents a software solution that provides the ability to preliminarily design such warheads, in addition to the further ability to analyze the adopted design with more precise numerical software, particularly Autodyn.
The research presented in this paper, as well as in papers [1-5], provides crucial information about the EFP performances in a short time, without time-consuming comprehensive preparation. In the current paper, a large

the EFP velocity and the second is the deformation energy distribution, as the projectile form factor.

The results of these analytical methods and their implementation in the algorithm contribute to improving the accuracy of the EFP velocity estimations. This is achieved by an appropriate increase in the number of grid elements.

The idea of this paper is to verify the analytical method implemented in the software for several types and geometries of EFP, and to compare the velocity results with the available experimental results. The data used for the simulation software are provided in papers [2, 8, 12, 13] and taken as the experimental base given in the appendix (Table 3). The geometry of the considered warheads is shown in Figure 1.

2. ADOPTED ANALYTICAL METHOD AND ALGORITHM

The adopted analytical model is based on explosive masses and on energy and momentum balance equations to estimate the initial EFP velocity. By partitioning the liner into an elementary grid and by accepting that the detonation pressure of the explosive products acts on each particular element of the grid, the impulses and momentum exchanges, in addition to the final liner velocity, can be summarized [1]. The semi-spherical liner is divided into n observed elements starting from its axis of symmetry. In addition, the full cylindrical explosive charge volume is divided in the same way. Each element of the liner corresponds to the amount of the explosive segment oriented normal to the surface of the liner and located above it (Figure 1). The initial velocities of these elements depend not only on their position on the liner, but also on the liner's geometry. For further analysis, the following assumptions are accepted:
- Detonation products attack the metal liner immediately.
- The motion of each discrete element of the metal liner is along the radius of the liner and there are no colliding effects between grid elements.
- The compressive strain rate along the axis is constant, \dot{\varepsilon}_{zi} = const.

Figure 1. Adopted geometry of the EFP warhead for the analytical analyses; 1 - liner, 2 - explosive charge, 3 - case, 4 - back plate; input parameters: D - caliber, L - length of charge, l - starting cone position, \delta_1 - thickness of liner center, \delta_2 - thickness of liner edge, \delta_3 - thickness of cover, \delta_4 = \delta_5 - thickness of back plate, \alpha - angle of cone, R_1 - outer radius, R_2 - inner radius

Using the previous assumptions and the energy balance equation of the detonation process, the velocity of a particular ejected element of the liner's grid, V_{0i} [1], is equal to:

V_{0i} = D \sqrt{ \frac{1}{k^2 - 1} \cdot \frac{3\eta_i}{3 + \eta_i} } ;  i = 1, 2, ..., n   (1)

where \eta_i = m_{ai}/M_i is the loading factor of the i-th liner element, m_{ai} the active mass of the corresponding explosive segment, M_i the mass of the liner's i-th segment, D the detonation velocity and k the polytropic coefficient of the detonation products, usually taken as k = 3 [1].

The active mass of the explosive m_{ai} in the loading factor in equation (1) is a fictive explosive mass that reproduces all effects of the energies produced by the real explosive masses and covers. The corresponding expression for this mass is [1]

m_{ai} = \frac{m_i}{2} \left( 1 + \frac{M_{Ki} - M_i}{M_i + M_{Ki} + m_i} \right) ;  i = 1, 2, ..., n   (2)

or

m_{ai} = \frac{m_i^2}{2(M_i + m_i)}  for  M_{Ki} = 0 ;  i = 1, 2, ..., n   (3)

used in the cases with (2) or without (3) warhead covers. The values m_i are the masses of the explosive segments and M_{Ki} are the masses of the metal cover segments.

The final velocity of the EFP is obtained by integrating all absolute velocities of the liner elements from equation (1), using the momentum conservation law. It is given by the following expression [1, 2, 11]:

V_{0E} = \frac{\sum_{i=1}^{n} V_{0i} M_i}{\sum_{i=1}^{n} M_i} ;  i = 1, 2, ..., n   (4)

The differences between the kinetic energies of the elements correspond to the plastic deformation work along the z-axis [1, 5, 11], as the relative motion towards the liner's mass center. The expression that represents the energy of the axial deformation work is [2]:

A_{DE} = \frac{1}{2} \sum_{i=1}^{n} M_i (V_{0i} \cos\alpha_i)^2 ;  i = 1, 2, ..., n   (5)

Besides, the second method considers the energy of the radial deformation work, which corresponds to the part of the kinetic energy created by the radial displacement of the elements. This radial deformation work is represented by the radial velocity values of each element, V_{0i} \sin\alpha_i [1, 5, 11], and it is given in [2] as follows:

R_{DE} = \frac{1}{2} \sum_{i=1}^{n} M_i (V_{0i} \sin\alpha_i)^2 ;  i = 1, 2, ..., n   (6)

The algorithm, shown as a flow chart in Figure 2, significantly contributes to analyzing and evaluating the parameters affecting the main EFP performances. Using variations in the inputs, the algorithm provides sufficiently precise output data such as the axial initial velocity of the EFP liner as well as the kinetic energies distributed in the axial and radial directions.

The algorithm offers the possibility to choose various EFP configurations by varying the geometries and the used materials, in addition to the number of segments to be used, as presented by block (a) in Figure 2. After the designer approves the 3D visual model of the EFP (block (b)), the algorithm computes the masses and the volumes of each segment based on the inputs, referring to Guldin's theorem.

The volumes of the segments for the liner, the explosive, as well as the case are calculated in block (c) following these steps: firstly, the area of each segment is calculated by a double integral, where the intervals of integration are defined by the segment boundaries. Secondly, the distance traveled by each segment is computed as a function of the coordinate of the centroid and the angle of revolution, which is equal to 2π. Finally, the volume of each segment, which is generated by its rotation about the axis of revolution, is equal to the product of its area and the distance traveled by its geometric centroid.
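A minimal numerical sketch of this Guldin (Pappus) volume step, assuming a segment cross-section given as a polygon in the (r, z) plane; the polygon data and the function name are illustrative, not part of the authors' Matlab implementation:

```python
import math

def guldin_volume(polygon_rz):
    """Volume of the solid obtained by rotating a planar polygon (vertices (r, z),
    r >= 0) about the z-axis, using Guldin's theorem: V = area * 2*pi*r_centroid."""
    a = cx = 0.0
    n = len(polygon_rz)
    for i in range(n):
        r1, z1 = polygon_rz[i]
        r2, z2 = polygon_rz[(i + 1) % n]
        cross = r1 * z2 - r2 * z1        # shoelace term
        a += cross
        cx += (r1 + r2) * cross          # accumulator for the r-coordinate of the centroid
    area = abs(a) / 2.0
    r_centroid = cx / (3.0 * a)          # signed terms cancel, giving the true centroid radius
    return area * 2.0 * math.pi * abs(r_centroid)

# Example: a 2 mm x 3 mm rectangular cross-section whose centroid lies at r = 11 mm
print(guldin_volume([(10, 0), (12, 0), (12, 3), (10, 3)]))   # ~414.69 mm^3
```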
After that, the algorithm calculates the absolute velocity of each segment using equation (1), in block (e), controlled by block (d). Then, the initial velocity of the configuration as well as the kinetic energies distributed in the axial and radial directions are calculated by equations (4), (5) and (6), respectively, in block (f). Finally, the designer has the ability to export the configuration into numerical software (Autodyn) for a more precise study, in block (k). In case the results do not meet the system requirements, the simulation process can be reinitiated with new inputs.
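A minimal sketch, in Python rather than the Matlab used by the authors, of how equations (1)-(6) can be combined once the segment masses and angles are known; the function name, argument layout and example values are illustrative assumptions only.

```python
import math

def efp_performance(m, M, M_K, alpha, D, k=3.0):
    """Combine eqs. (1)-(6) for n liner segments.

    m     : explosive segment masses m_i (kg)
    M     : liner segment masses M_i (kg)
    M_K   : cover segment masses M_Ki (kg), zeros when there is no cover
    alpha : angles of the segment velocity vectors to the axis (rad)
    D     : detonation velocity (m/s); k: polytropic coefficient (k = 3)
    """
    n = len(M)
    V0 = []
    for i in range(n):
        # eq. (2) for the active explosive mass; reduces to eq. (3) when M_Ki = 0
        m_a = 0.5 * m[i] * (1.0 + (M_K[i] - M[i]) / (M[i] + M_K[i] + m[i]))
        eta = m_a / M[i]                                                       # loading factor
        V0.append(D * math.sqrt(3.0 * eta / ((k ** 2 - 1.0) * (3.0 + eta))))   # eq. (1)
    V0E = sum(V0[i] * M[i] for i in range(n)) / sum(M)                         # eq. (4)
    ADE = 0.5 * sum(M[i] * (V0[i] * math.cos(alpha[i])) ** 2 for i in range(n))  # eq. (5)
    RDE = 0.5 * sum(M[i] * (V0[i] * math.sin(alpha[i])) ** 2 for i in range(n))  # eq. (6)
    return V0E, ADE, RDE

# Example with three fictitious segments (not data from Table 3)
print(efp_performance(m=[0.02, 0.03, 0.03], M=[0.01, 0.015, 0.02],
                      M_K=[0.0, 0.0, 0.0], alpha=[0.0, 0.3, 0.6], D=8000.0))
```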

4. RESULTS AND DISCUSSION

In order to investigate the model limitations and to increase its precision, five simulations were carried out according to the configurations given in [5, 8 and 13], with a variation in the number of segments.
The obtained results in Table 1 show that the initial velocity for each configuration converges to a lower asymptotic bound with the increase in the number of segments. The threshold number of segments needed for improved precision is about 70. Beyond that number the velocity reaches its asymptotic minimum and remains unchanged with an increasing number of grid elements.
To validate the analytical model, the simulation results are compared with the available experimental results for known geometries of EFP warheads given in [2, 8, 12, 13]. The defined geometries and the characteristics of the used materials are provided in Table 3 in the appendix.
The experimental values of the velocity of the EFP warheads, together with the analytically calculated ones, are given in Figures 4 and 5, for each experiment. Figure 4 represents a comparative analysis of experimental and analytical results for 8 experiments. For each experiment, the relative error of the analytical method compared to the experimental data is expressed as a percentage. From the presented results, the relative error ranges from 0.15% to 16.89%, and its mean value is equal to 10.34%.

Figure 2. Algorithm for calculating EFP performances

159

OTEH2016

APRELIMINARYDESIGNMODELFOREXPLOSIVELYFORMEDPROJECTILES

Table 1. Convergence of selected results for the adopted analytical method

N       | 10      | 30      | 50      | 70      | 90      | 110
Units   | m/s     | m/s     | m/s     | m/s     | m/s     | m/s
V0E 1   | 2801.23 | 2787.01 | 2785.81 | 2785.48 | 2785.34 | 2785.3
V0E 2   | 2168.17 | 2146.51 | 2143.11 | 2142.86 | 2142.58 | 2142.21
V0E 3   | 2052.72 | 2032.52 | 2029.97 | 2029.14 | 2028.88 | 2028.55
V0E 4   | 2079    | 2043.41 | 2042.15 | 2040.58 | 2040.29 | 2040.14
V0E 5   | 2178.77 | 2155.06 | 2153.75 | 2152.03 | 2151.73 | 2151.73

In Figure 5, the comparative results of 6 experiments derived with the same geometry but with different case widths are presented. The results show quite small values of the relative error, which varies from 2.42% to 14.12%, with a mean error equal to 8.2%.
The preliminary analytical results, shown in Figures 4 and 5, are obtained based on the assumption that the formation process is completely ideal, whilst in reality this is unlikely. For that reason, a correction factor Kc is introduced in order to take into account all the imperfections indicated by the experimental data. It is inserted as an additional term in equation (4), with the task of correcting the value of the velocity calculated by the analytical model. Equation (4) then becomes:

V_{0E(c)} = \frac{\sum_{i=1}^{n} V_{0i} M_i}{K_c \sum_{i=1}^{n} M_i} ;  i = 1, 2, ..., n   (7)

Figure 4. Experiments vs. analytical results (velocity [m/s] per experiment number), where experiments No.1, No.2, No.3, No.4 and No.5 are taken from [8], No.6 from [5], and No.7 and No.8 from [14]

Figure 5. Experiments vs. analytical results for the changing-case experiments (velocity [m/s] per experiment number), where experiments No.1, No.2, No.3, No.4, No.5 and No.6 are taken from [13]

With the specific correction factor, the newly obtained results are presented in Figures 6 and 7. To review the results in a compact way, the diagrams are organized so that the previous preliminary results are presented together with the new adjusted results given by equation (7).

Figure 6. Experiments vs. analytical results with the correction factor, where each experiment number corresponds to the one in Figure 4

From Figure 6 it is evident that the new velocities are slightly lower compared to the previously obtained ones. The relative error of the adjusted analytical model compared to the experimental data varies from 2.6% to 18.37%, with a mean value of 9.94%.
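As a small worked illustration of equation (7) and of how the relative errors quoted above are obtained, the snippet below applies a hypothetical correction factor to an analytically estimated velocity; the numerical values are placeholders, not data from Table 3.

```python
def corrected_velocity(v0e_analytical, kc):
    """Eq. (7): dividing the uncorrected velocity of eq. (4) by the correction factor Kc."""
    return v0e_analytical / kc

def relative_error_pct(v_analytical, v_experimental):
    """Relative error of the analytical estimate with respect to the experiment, in percent."""
    return abs(v_analytical - v_experimental) / v_experimental * 100.0

v_exp, v_model = 2000.0, 2150.0   # placeholder velocities in m/s
print(relative_error_pct(v_model, v_exp))                               # 7.5
print(relative_error_pct(corrected_velocity(v_model, kc=1.05), v_exp))  # ~2.4
```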

Figure 7. Experiments vs. analytical results with the correction factor, where each experiment number corresponds to the one in Figure 5

The corrected analytical model shown in Figure 7 provides slightly better velocity results compared to the previous model, in the range of 1.67% to 7.66%, with a mean value of errors equal to 4.1%.
To better understand the forming process, the energies of axial and radial deformation work are calculated by eq. (5) and eq. (6). The results, presented in Table 2, are the axial (ADE) and radial (RDE) deformation energies, in addition to their ratio coefficient kDE. This coefficient determines the expected final projectile form; it generally varies between 1 and 1.7. For materials such as aluminum this coefficient is less than 1, while for experiments with a larger liner radius (experiment No.6) kDE is extremely large, more than 3. Further research has to show the relationship between the errors in estimating the velocities and the coefficients kDE and Kc.

Table 2. Axial and radial deformation energy as well as the ratio coefficient

Experiment    | ADE [J]  | RDE [J]  | kDE
No.1 (Fig. 4) | 42259.62 | 30524.52 | 1.38
No.2 (Fig. 4) | 33619.82 | 26849.15 | 1.25
No.3 (Fig. 4) | 31495.86 | 24238.58 | 1.30
No.4 (Fig. 4) | 31515.08 | 27256.37 | 1.17
No.5 (Fig. 4) | 15157.18 | 25577.2  | 0.60
No.6 (Fig. 4) | 49456.26 | 15676.72 | 3.15
No.1 (Fig. 5) | 76846.92 | 46619.98 | 1.65
No.2 (Fig. 5) | 77981.81 | 53368.5  | 1.46
No.3 (Fig. 5) | 79818.56 | 57658.22 | 1.38
No.4 (Fig. 5) | 81620.08 | 60652.4  | 1.35
No.5 (Fig. 5) | 83251.63 | 62872.33 | 1.32
No.6 (Fig. 5) | 84701.13 | 64588.93 | 1.31

The analytical method does not take into account the imperfections in the formation process, i.e. it neglects the associated losses, so it generally provides higher velocity values compared to the experimental ones. Some experiments, though, such as No.2, No.5 and No.8 in Figure 4, show greater values of the measured velocities than the analytically obtained ones, which yields a greater relative error (as a percentage) in Figure 6. These illogical results are due to a lack of information on the characteristics of the used materials, especially the explosives, as well as on certain dimensions. In order to make the comparison, the missing values have been approximated, which in some cases leads to illogical results and enlarges the relative error. Although some results are illogical, with a greater error, they do not exceed a reasonable limit of 20%, which is quite acceptable for analytical models.

5. CONCLUSION

Based on the research and the results obtained in the current work, the following can be concluded:
- The analytical method implemented in the program aims to increase the accuracy of the results by enlarging the number of the used elements. The convergence of the method is achieved for n = 70 elements, taking into account the 14 observed configurations; for n > 70 no considerable increase in the accuracy of the results can be obtained.
- Using all 14 experiments, the mean value of the relative error of the analytical method without the correction factor is 9.41%, while using the correction factor decreases the mean error to 7.57%.
- Generally, the adopted analytical method gives acceptable results with respect to the experimental data.
- Further research can be oriented towards the influence of the axial and radial energy distribution on the projectile form, which can help in designing a new semi-empirical method for more precise preliminary estimations of EFP parameters.

Appendix
The presented results are obtained according to the available experimental results given in [2, 8, 12, 13], for known geometries of EFP warheads. For each type of warhead, the geometric parameters that constitute the input data for the program are defined in Figure 1. In addition to the defined geometry, the characteristics of the used materials are also provided in Table 3.


Table 3. Dimensions and materials of the selected experiments defined by figure 1

ACKNOWLEDGEMENTS
This research has been supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia, through the project III-47029 in the year 2016, which is gratefully acknowledged.

References
[1] Orlenko, L.P.: 2004. Fizika vzriva (in Russian), Glavnaja redakcija fizicesko-matematiceskoi literaturi, Moskva.
[2] Sharma, V.K., Kishore, P., Bhattacharyya, A.R., Raychaudhuri, T.K., Singh, S.: An Analytical Approach for Modeling EFP Formation and Estimation of Confinement on Velocity, 16th International Symposium of Ballistics, San Francisco, 1996, pp. 565-574.
[3] Markovic, M.: 2011. Explosively Formed Projectiles, MSc Thesis (in Serbian), University of Belgrade, Faculty of Mechanical Engineering, Weapon Systems Department.
[4] Markovic, M., Milinovic, M., Elek, P., Jaramaz, S., Mickovic, D.: 2014. Comparative approaches to the modeling of explosively formed projectiles, Proceedings of Tomsk State University, Series Physics and Mathematics, V.293.
[5] Markovic, M., Elek, P., Jaramaz, S., Milinovic, M., Mickovic, D.: 2014. Numerical and analytical approach to the modeling of explosively formed projectiles, 6th International Scientific Conference OTEH 2014, Belgrade, October 9-10.
[6] Markovic, M., Milinovic, M., Jeremic, O., Jaramaz, S.: 2015. Numerical modeling of temperature field on high velocity explosively formed projectile, 17th Symposium on Thermal Science and Engineering of Serbia, October 20-23, pp. 175-181.
[7] Pappu, S., Murr, L.E.: 2002. Hydrocode and microstructural analysis of explosively formed penetrators, Journal of Materials Science, Vol. 37, pp. 233-248.
[8] Hussain, G., Hameed, A., Hetherington, J.G.: 2013. Analytical performance study of explosively formed projectile, Journal of Applied Mechanics and Technical Physics, Vol. 54, No. 1, pp. 10-20.
[9] Fedorov, S.V., Bayanova, Ya.M., Ladov, S.V.: 2015. Numerical analysis of the effect of the geometric parameters of a combined shaped-charge liner on the mass and velocity of explosively formed compact elements, Combustion, Explosion, and Shock Waves, Vol. 51, No. 1, pp. 130-142.
[10] Hussain, G., Malik, A.Q., Hameed, A.: 2011. Gradient Valued Profiles and L/D Ratio of Al EFP With Modified Johnson Cook Model, Journal of Materials Science and Engineering 5, pp. 599-604.
[11] Bender, D., Carleone, J.: 1993. Tactical Missile Warheads - Explosively Formed Projectiles, American Institute of Aeronautics and Astronautics, Washington.
[12] Teng, T.L., Chu, Y.A., Chang, F.A., Shen, B.C.: 2007. Design and Implementation of a High Velocity Projectile Generator, Fizika Goreniya i Vzryva, Vol. 43, No. 2, pp. 233-240.
[13] Weimann, K.: 1993. Research and development in the area of explosively formed projectile charge technology, Propellants, Explosives, Pyrotechnics, Vol. 18, pp. 294-298.
[14] Zivkov, M.: 1984. Forming process of Explosively Formed Projectile With Small Explosive Caliber and Applications in Penetrating The Armor, Symposium on Explosive Materials, pp. 251-262, Uzice.


TENDENCIES OF DEVELOPMENT OF AMPHIBIOUS ASSETS


IN ARMED FORCES OF NATO COUNTRIES
NENAD KOVAEVI
Military academy, Belgrade, www.inz.84kula@gmail.com
NENAD DIMITRIJEVI
Military academy, Belgrade, www.neshadim@mts.rs

Abstract: The paper presents a brief overview of current solutions and directions of further development of amphibious means for overcoming water obstacles in the armed forces of the countries of the North Atlantic Treaty Organization (NATO). A major problem in preparing the paper was the lack of recent literature, so data from the Internet have largely been used. Knowledge of the amphibious assets for overcoming water obstacles in the armed forces of NATO countries makes it possible to assess, in a purposeful way, the effects of their use and the need for innovation and investment in these resources.
Keywords: amphibious assets, armed forces, literature.
The armed forces (AF) of NATO countries use two types of assets for overcoming water obstacles: formation (establishment) assets and local assets. Local assets are assets that are not part of the unit establishment (the units do not carry them with them), but are already present at the place or in the zone or area of crossing the water obstacle, in public or private ownership, and have the same purpose as the military formation assets.

1. INTRODUCTION
The phrase "water obstacle crossing site" denotes, in principle, a part of a certain type of water obstacle, together with the bank and hinterland on one's own and on the opposite side, which is used in order to provide a smooth and continuous transfer of people and vehicles of all categories over any kind of water barrier (rivers, canals, lakes). The size of the area for a crossing site differs, considering the type of crossing organized, the conditions of the access and exit roads to the river, the forestation and masking conditions of the banks for the units, the number of assets available at the crossing site to cross the river, and a number of other factors.

The formation assets for overcoming water obstacles in the AF of NATO countries are:
- amphibious transporters,
- landing craft,
- ferries (light ferries, heavy self-propelled amphibious ferries and ferries assembled from pontoon park sets),
- hovercraft,
- mechanized bridges (heavy and light),
- pontoon park sets,
- folding bridges ("Bailey") and
- light pedestrian bridges.

In order to overcome water obstacles successfully, the units must be familiar with the characteristics that affect the planning, organization and execution of the procedure. These characteristics are: width, depth, flow velocity, slope of the banks, composition of the bottom, height of the banks, access roads and forestation of the banks. In tactical terms, the expression "crossing over the river" means the overcoming of the river by the units of the armed forces. Here, two ways of overcoming the river should be distinguished:
- crossing a river when the opposite bank is not defended by enemy forces, and
- assault crossing of the river (forcing the river) when the enemy defends the opposite bank.

The paper presents a systematization of the existing amphibious assets and the tendencies of their further development in the AF of NATO countries. The first part deals with the historical genesis of the use of amphibious assets. The systematization of the amphibious assets in the AF of NATO countries is given in the second part, and the directions of further development of the amphibious assets in the AF of NATO countries are discussed in the third part of the paper.
Throughout history, armed forces have had to cross water obstacles in order to join battle or to retreat. Overcoming water obstacles, as an element of engineer support and counteraction in modern warfare, has retained the reputation of one of the most difficult tactical actions, despite the enormous financial resources invested in the development of equipment and assets for performing it.

2. HISTORICAL ORIGINS OF THE USE OF AMPHIBIOUS ASSETS
The first known self-propelled amphibious vehicle, powered by steam, was presented by the American inventor
Oliver Evans in 1805, under the name "Orukter Amphibolos". A year later the French combined structural elements of a ship and of a passenger vehicle and created a precursor of modern amphibious vehicles. However, at that time the designers were not adequately understood, so further development and improvement of amphibious assets had to wait more than a century.


On the one hand, the NATO countries (led by the USA) focused the development of these resources on the development of the amphibious characteristics of military equipment and on the development of assault bridges.
On the other hand, the Warsaw Pact countries (led by the USSR) mainly focused the development of resources for overcoming water obstacles on the development of pontoon bridges and of amphibious assets for the transport of weapons and combat systems. [2]

The importance of rapid and safe overcoming of water obstacles was first realized by the Germans, who for this purpose constructed the first true amphibious vehicle in the world just before the Second World War. It was the "Volkswagen" floating vehicle (VW Schwimmwagen), built on the chassis of the Volkswagen civilian passenger vehicle.

Vehicles of this type were used to equip SS troops and were entirely mechanically driven. Basic tactical and technical data of the vehicle: weight: 1362 kg, length: 3.825 m, height: 1.615 m, speed on water: 10 km/h, speed on land: 80 km/h, crew: 4 soldiers, armed with a 7.92 mm MG 34 machine gun. [1] The vehicle is shown in Picture 1.

3. EXISTING AMPHIBIOUS ASSETS IN THE AF OF NATO COUNTRIES

In order to better understand the progress in terms of development and innovation of the amphibious assets in the AF of NATO countries, it is first necessary to systematize the existing ones. According to their structural solutions, amphibious assets are divided into:
- combat amphibious vehicles,
- engineering amphibious assets and
- hovercraft.
Combat amphibious vehicles are amphibious assets whose primary purpose is to perform combat tasks while being amphibious; their most characteristic representative is the amphibious infantry fighting vehicle. These assets have the advantage in an assault river crossing in the first-wave forces, primarily due to the protection they provide to the units during the assault crossing, their autonomy of movement, and the possibility of fighting from the vehicle and creating a bridgehead on the enemy's bank.

Picture 1. ,,Volkswagen" floating vehicle

Engineering amphibious assets are those whose main purpose is to transfer the combat systems of a unit across a water obstacle in a short period of time; the representatives of these types of amphibious assets are amphibious ferries and transporters. This type of asset is primarily intended for the transport of all types of tanks, self-propelled weapons and vehicles, but also of people and personal equipment, in order to support units that are negotiating a certain water obstacle. [2]

In addition to the amphibious means, Germany at that time, preparing to conduct the "blitzkrieg" (a rapid pace of war), introduced into service a whole series of new means for overcoming water obstacles, from pneumatic and assault boats to amphibious tanks, transporters, girder bridges and floating tanks. Faced with the problem of coping with the large rivers of the USSR, the Wehrmacht forces also attached great importance to the modernization and development of pontoon bridges. Starting from bridge sets with load capacities from 3000 to 5000 kg at the beginning of the war, by 1943 bridge sets had already been introduced whose capacity enabled the successful transfer of almost all military equipment across water obstacles. [2]

Hovercraft are assets that are still in their infancy, and they actually represent the future in terms of resources for overcoming water obstacles. So far they are in service only in the armed forces of the USA.
The AF of NATO countries lead in the development and research of all types of amphibious assets. Primacy in this sphere of development and modernization of arms and military equipment is held by the armed forces of the USA and the United Kingdom (UK). Consequently, all the other NATO members use amphibious asset models that are actually modified versions of the assets used in the above mentioned countries. [3]

The exponential development of science and technology has always been subordinated to military objectives and needs. Until the end of the eighties of the 20th century it was dictated by the USA and the USSR, as leaders of the two military-political blocs (NATO and the Warsaw Pact). As a result, two different approaches to solving the problem of overcoming water obstacles emerged in this period.


Specifications of the transporter: crew: 2 + 10 men, armed with a 7.62 mm machine gun, weight: 19000 kg, power-to-weight ratio: 12.47 kW/t, length: 6.83 m, width: 2.98 m, height: 2.30 m, maximum speed: 105 km/h on land, 10 km/h on water.
The division of the types and designs of particular amphibious assets that are in operational use in the AF of NATO countries is given in Table 1.
Table 1. List of amphibious assets in NATO countries

No. | Armed Forces | Combat amphibious vehicles | Engineer amphibious assets | Hovercraft
1.  | USA     | Bradley M2/M3, LAV 25, AAV7-A1, LVT7, Mowag Piranha I 8x8 | BV 206S, Fuchs M93A1 Fox | LAV, LCAC-27, LCAC-74
2.  | UK      | FV 432, Viking BVS 10 | ARS, BV 206S, Fuchs M93A1 Fox | LAV
3.  | France  | VAB, Panhard VBR, AMX 10RC | ARS, EFA, Gillois-EWK | LCAC-27, LCAC-74
4.  | Germany | Luchs, Fuchs M93A1 Fox, Fuchs M93A2 Panzer | BV 206S, M-2 and M-3, Gillois-EWK | -
5.  | Turkey  | AIFV, ACV S | - | -

OTEH2016

3.2. Amphibious armored vehicle "Viking" (BVS 10)
"Viking" is an amphibious all-terrain vehicle. It is a modified version of the "BV 206S" armored personnel carrier. It actually consists of a set of two vehicle units with an articulated steering system. It is in service with the UK Marine commandos. It is protected by a steel structure, designed to reduce radar reflection. The armor cannot be penetrated by a 7.62 mm round from a direct hit, or by shrapnel from 152 mm calibre artillery projectiles that land no closer than 10 m from the vehicle. It also cannot be damaged by the pressure-activated detonation of a mine containing no more than 0.5 kg of explosive. [5]

In the following, a representative of each type of amphibious asset is presented.

3.1. Armored transporter "Fuchs" M93A1 Fox
The German factory "Henschel Defense Systems" developed the "Fuchs" in 1960 for the purposes of the AF of former West Germany. Between 1979 and 1986, 996 of these transporters were produced. Since then, production has continued for foreign markets, mainly in variants for special purposes, such as NBC reconnaissance. During the Gulf War (1990-1991), such versions of the vehicle were delivered to Israel, the UK and the USA. [4]

Picture 3. Amphibious armored vehicle "Viking" (BVS 10)

Its armament comprises a 12.7 mm heavy machine gun, a 7.62 mm machine gun and the "Milan" antitank system. The "Viking" can overcome a water obstacle up to a depth of 1.5 m without preparation for swimming. The crew consists of a driver and 3 equipped marines, while the second cabin can accommodate 8 fully equipped marines. [1] The vehicle is shown in Picture 3.
Tactical and technical characteristics: combat weight: 10600 kg, maximum speed on land: 50 km/h, maximum speed when swimming: 5 km/h, drive: tracked, on 4 tracks, maximum range: 300 km.

The transporter swims propelled by two propellers. Standard equipment includes an NBC protection system and passive night vision equipment. A more modern version of the "Fuchs" vehicle is in service with almost all units of the AF of the Federal Republic of Germany and is equipped with stronger armor and a 7.62 mm machine gun mounted on the front of the roof of the carrier. [1] The transporter is shown in Picture 2.

3.3. Hovercraft
Hovercraft are assets of the latest technology and have found wide application in overcoming water obstacles. They appeared in the seventies of the 20th century, during the war in Southeast Asia. At that time they were used only in coastal sea areas and large river basins. The basic principle of their operation is movement on an air cushion.
Tactical and technical features: displacement (full): 535000 kg, crew of 27, length: 56.2 m, width: 22.3 m, speed on land: 85 km/h, speed on water: 60 knots, engine: gas turbine [6]. A hovercraft set is shown in Picture 4.

Picture 2. Armored fighting vehicle "Fuchs" M93A1 Fox


on the development of a new type of hovercraft - an "armored hovercraft" - primarily intended for the transport of the first combat echelon during a ship-to-shore landing. The planned maximum speed on land would be around 100 km/h, and on water up to 120 km/h. The appearance of the "LAV" hovercraft is shown in Picture 5.

Picture 4. Hovercraft

4. DIRECTIONS OF FURTHER DEVELOPMENT OF AMPHIBIOUS ASSETS IN THE AF OF NATO COUNTRIES

Picture 5. USA and UK hovercraft

The asset would be driven by powerful motors that would ensure the elevation of the hovercraft above the surface. This asset would have another important characteristic, namely the ability to overcome mine obstacles by hovering above them, which is very convenient in terms of maintaining the pace of the landing.

In view of the further development of amphibious means, the AF of NATO countries are working on two fronts: modification (modernization) of existing assets and development of new types of hovercraft.
Although amphibious assets have proved to be very efficient and reliable, the modern defence industries of the majority of the AF of NATO countries plan, over the next 20 to 30 years, to partly (in some countries, completely) withdraw them from operational use and to let hovercraft take their place. However, some NATO countries, according to the characteristics of their territory and waters, still attach great importance to the development of amphibious means, especially for the transport of combat systems.

The AF of NATO countries attach great importance to the development of resources for overcoming water obstacles. Their further development is aimed at the construction of amphibious assets with a focus on meeting the following conditions:
- the assets must achieve high speeds,
- they must have low weight,
- they must have adequate capacity,
- they must be armored, and
- they must meet the requirements in terms of maintenance and operation.

Similarly, the AF of the Russian Federation have been developing amphibious vehicles since the eighties of the 20th century, such as the BAZ-5921 and BAZ-5922, on whose platforms the high-precision 9K79 "Tochka" (Point) missile system (designed by Sergey Pavlovich Nepobedimy), known under the NATO codification system as the SS-21 "Scarab", is placed, with launchers for one and two rockets. The modernization of these types of amphibious assets started in 2007; the most has been done in terms of making the amphibian's armor of plastic and composite materials, with the intent to significantly reduce the ingress of water into the body of the amphibian in case it is hit. [7]

5. CONCLUSION
Based on the above findings, we conclude that the further development of amphibious assets in the AF of NATO countries will be focused on the use of new lightweight materials in the construction of amphibious means for overcoming water obstacles. The materials must be lightweight, but also sturdy, able to withstand high loads and to provide protection to the crews; composite materials are commonly used. Among amphibious assets, the emphasis is primarily on folding ferries and bridges of different capacities with two or more crossing lanes.
As we have already said, hovercraft are assets of the latest technology, but the main problem of the first hovercraft was that they were very slow and had a small payload capacity compared to their own weight. At that time they were used only in coastal sea areas and in the basins of large rivers.

However, for the designers of amphibious assets in NATO countries the problem remains unresolved of finding a way to make the combat elements of joint units independent of amphibious assets and pontoon materiel, and able to move under their own power across water obstacles. On this basis, in the coming years a special place in the further development of military equipment is intended for innovations in terms of constructing light and heavy armored amphibious personnel carriers, self-propelled anti-tank weapons and amphibious tanks capable of underwater driving. These assets will also allow a fast passage across water barriers and will protect the personnel, but will also have the possibility of an active and strong effect on the enemy.

Data on the new types of hovercraft of the US Armed Forces have not yet been published, so only images of them can be found. It is only known that work is already underway on the modernization of the existing assets - hovercraft of the "LAV" type. Also, the AF of the USA and the UK, in addition to the already existing cooperative model, are currently working


References
[1] Babic,B., Kovacevic,N.: Influence of amphibious assets on environment, Risk and Safety Engineering, Kopaonik, 2013, February 02-06, pp. 15-22.
[2] Kovacevic,N., Lazic,G.: Amphibious assets of armed forces of NATO countries, Russian Federation and People's Republic of China, Military Technical Courier, 63 (1) (2015) 144-168.
[3] Milojevic,D.: Tendencies of development of river crossing assets in modern armies, New Messenger of the Serbian Armed Forces, 1 (1) (2010) 125.
[4] http://desant.com.ua
[5] http://www.army-technology.com
[6] http://worldweapon.com
[7] Kovacevic,N., Dimitrijevic,N., Babic,B.: Tendencies of development of amphibious assets in armed forces of the Russian Federation, ICDQM, Prijevor-Cacak, 2016, June 29-30, pp. 552-559.


ON ALGORITHM OF SYNCHRONIZED SWARMING AGAINST AN


ACTIVE THREAT SIMULATOR
RADOMIR JANKOVI
Union University School of Computing, Belgrade, rjankovic@raf.edu.rs
MOMILO MILINOVI
Faculty of Mechanical Engineering, Belgrade, mmilinovic@mas.bg.ac.rs

Abstract: The algorithm of a simulator of a system consisting of a group of armed mobile platforms which, in the defence of a territory against an active threat, uses synchronized swarming tactics is presented. The synchronized swarming concept has been introduced in order to prevent the active threat from destroying the swarming platforms one by one, thus reducing the probability of success of the swarm as a whole.
Keywords: swarming; armed mobile platform; synchronization; simulation; algorithm.
been developed. System activities have been represented by time delays. The AMP group swarming in the model takes place in a battlefield represented by a 2-dimensional rectangular coordinate system (Figure 1). The following move in the model:

1. INTRODUCTION
Swarming [1] is a tactic by which military forces attack an adversary from many different directions and then regroup. Repeated actions of many small, manoeuvrable units go on, circling constantly through the following four phases of swarming:
- dispersed deployment of units in the battlefield;
- gathering (concentration) of many units on a common target;
- action at the target from all directions;
- dispersion of units.

- units of the group (AMP-i, i = 1, 2, ..., N);
- targets - threat units (P-j, j = 1, 2, ..., M);
- command information system (C4I) messages.

At the time instant t = 0, the AMP-i are in the initial deployment in the battlefield, and are characterized by their maximal velocities (Vi), the efficient ranges of their weapons (RW-i) and their individual possible effects on the target threat unit P-j (Uij).

The swarming tactics is applied by much smaller units, but their use is far more efficient, so in their actions as a whole they can often defeat a many times superior adversary [2].

At regular time intervals Δt, the AMPs receive from the C4I system information on the threat units (P-j) and on the movement of the other AMP-i, and at the same time report their own current positions.

Although numerous examples of successful swarming application have been recorded in history [3], the significance of this tactics did not reach its full measure until our days, due to the brisk development of information technologies and the merging of computing and telecommunications. More intensive military swarming research began after 2000, and attained its first results mainly in the areas of unmanned vehicles (in the air, underwater and on the ground), the air force, the navy and some special ground forces units [4], [5], [6] and [7].

For small countries and their armies, as is the case with Serbia, one of the best investments could be the adaptation of parts of their armed forces, especially armoured and mechanized units (AMU), for the application of swarming tactics [8]. That is the motivation for the AMU swarming tactics research at the Union University School of Computing and at the Belgrade Faculty of Mechanical Engineering, which is, according to our best knowledge, the first research of AMU swarming tactics. So far, the discrete events simulation model of the armoured mobile platforms (AMPs) group swarming has


Figure 1. Swarming: group of 4 AMPs against 1 threat



According to this information, the AMP-i are directed towards the threat units and move towards them, with the aim of taking, as soon as possible, positions allowing successful swarming for destroying, disabling or hampering them in the accomplishment of their mission.


for dynamic checkout of swarming success criteria in the


course of simulation.

In order that AMP-i participating in swarming could act


against threat P-j, the following conditions are to be
satisfied:
AMP-i disposes the weapon W-i compatible with threat
P-j.
The distance between AMP-i and threat P-j should be
within the effective range of the weapon W-i:
Dij = ( y j (t ) yi (t ))2 + ( x j (t ) xi (t ))2 DW i

(1)

- The previous two conditions should be satisfied by a sufficient number of AMPs, so that their cumulative effect, KU_j, is greater than or equal to the critical threshold of the multiple AMPs effect, PKU_j, which is characteristic of the threat unit P-j:

KU_j = \sum_{i=1}^{N} A_{ij} K_{ij} U_{ij} \ge PKU_j    (2)

where:
- A_{ij} is a coefficient (0/1) of the previous assignment of AMP-i to P-j in multiple-target swarming models;
- K_{ij} is a compatibility coefficient (0/1) of the weapon W-i with the threat P-j;
- U_{ij} is the possible effect of the weapon W-i against the threat P-j.

Figure 2. Synchronization of swarming AMP-1, 2, ..., 8 against threat P-j

The relation of the threat P-j with the participants of the swarm can be passive or active.

A passive threat is dedicated exclusively to its own mission accomplishment and ignores the swarmers. If in its movement such a passive threat succeeds in reaching its assigned final position, and during that time the cumulative effect of the swarming participants never exceeds the critical threshold defined in expression (2), the threat has accomplished its mission and the swarming tactics against it has failed.

An active threat, besides accomplishing its own mission, defends itself from the swarmers: apart from moving towards its assigned final position, it tries to destroy those AMPs which in the meantime find themselves within the range of the threat's own weapons. In that way, the number of AMPs decreases, as does the probability of the success of their swarming tactics.

In some real situations it is possible that a threat unit ignores the swarmers' activities and deals only with its own mission accomplishment. This happens in cases of a huge supremacy of the threat over the number and type of swarmers. More often, however, a threat defends itself from the swarm, trying to destroy the AMPs one by one, thus hampering their gathering in a number which could provide successful swarming. One of the possible solutions for successful swarming of a group of AMPs against an active threat is to include the synchronization concept within the swarming tactics.

The algorithm and the results of the experiments with the developed simulator of swarming of a group of AMPs against a single passive threat have been given in [9], and the algorithm of the simulator of swarming against multiple passive threats in [10]. During the development of the swarming simulator against a passive threat, the mechanisms for the simulation of the movements of the threat and of the swarmers have been elaborated, as well as the ones for dynamic checkout of the swarming success criteria in the course of the simulation. They represent the basis for further development of this class of simulators, including those for the synchronized swarming of a group of AMPs against one or more active threats.

In this paper the synchronization concept is given first, and then the algorithm of the synchronized swarming of an AMPs group against an active threat simulator is presented.
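Before moving to the synchronization concept, the swarming-success test defined by conditions (1) and (2) can be summarized in a short Python sketch; the data layout (plain dictionaries) and the numerical values are illustrative assumptions, not the data structures of the simulator described here.

import math

def within_weapon_range(amp, threat):
    # Condition (1): distance between AMP-i and threat P-j within the weapon range D_W-i
    d_ij = math.hypot(threat["x"] - amp["x"], threat["y"] - amp["y"])
    return d_ij <= amp["weapon_range"]

def cumulative_effect(amps, threat):
    # Condition (2): cumulative effect KU_j of assigned, compatible, surviving AMPs in range
    ku_j = 0.0
    for amp in amps:
        if amp["assigned"] and amp["compatible"] and amp["alive"] and within_weapon_range(amp, threat):
            ku_j += amp["effect"]  # U_ij, possible effect of weapon W-i against P-j
    return ku_j

# Illustrative example: four identical AMPs, three of them close enough to act
amps = [{"x": x, "y": 0.0, "weapon_range": 2000.0, "assigned": 1,
         "compatible": 1, "alive": 1, "effect": 0.3}
        for x in (500.0, 900.0, 1400.0, 4000.0)]
threat = {"x": 0.0, "y": 0.0, "critical_threshold": 0.8}   # PKU_j
print(cumulative_effect(amps, threat) >= threat["critical_threshold"])  # True: 0.9 >= 0.8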

2. SWARMING SYNCHRONIZATION CONCEPT

If the AMPs applying swarming against an active threat do not do that in a synchronized way, i.e. if they do not reach the distance from the threat within the range of their main weapons at approximately the same time, there is a probability of swarming failure, because the threat could destroy the AMPs one by one, as they appear within the range of its own weapons. To solve that problem, the synchronization concept of the AMPs participating in swarming has been introduced into the algorithm, functioning as follows (Figure 2):
a. The synchronization zone (SZ) is introduced around the active threat as the circular ring represented by the expression:

   D_{zp} = R_{P-j} (1 - q)    (3)

   where D_{zp} is the distance from the threat at which the AMP is defined to be in the SZ, R_{P-j} is the maximal effective range of the threat's weapon, and q is a number between 0 and 1 defining the ring width, which is determined separately for every pair of AMP-i and threat P-j.
b. If there are enough AMPs at distances which enable their efficient action, synchronization is no longer needed and such AMPs can apply successful swarming.
c. If that is not the case, the AMPs which are farther from the SZ (AMP-1, AMP-2, AMP-3, AMP-4 and AMP-6 in Figure 2) will continue to approach the threat by the shortest possible way.
d. AMPs in the SZ move parallel with the threat, based on the so-far known course and velocity of the threat obtained from the C4I system (AMP-5 and AMP-7 in Figure 2).
e. AMPs situated between the threat and the SZ will move away from the threat, directing their velocity vectors away from the last known position of the threat (AMP-8 in Figure 2).
Such a mechanism enables that, while there are not enough AMPs for successful swarming:
- AMPs far from the threat continue to approach it by the shortest possible way;
- AMPs in the SZ move parallel with the target/threat at a relatively safe distance, ready to move again towards the threat when enough AMPs gather for a successful swarming cumulative effect;
- AMPs near the target/threat move away from it towards the SZ, in order to avoid individual destruction and swarming failure.
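A compact sketch of rules c-e, assuming simple 2-D kinematics; the SZ ring bounds are passed in as parameters because the paper defines them through expressions (3) and (5), and the helper names and values are illustrative, not taken from the simulator.

def choose_behaviour(d_to_threat, sz_inner, sz_outer):
    """Rules c-e: approach while farther than the SZ, move parallel ("shadow")
    while inside the SZ ring, retreat when between the threat and the SZ."""
    if d_to_threat > sz_outer:
        return "approach"     # rule c: close in by the shortest way
    if d_to_threat >= sz_inner:
        return "shadow"       # rule d: move parallel with the threat inside the SZ
    return "retreat"          # rule e: move away from the threat, back towards the SZ

# Illustrative check with an assumed SZ between 2400 m and 3000 m
for d in (5000.0, 2700.0, 1000.0):
    print(d, choose_behaviour(d, 2400.0, 3000.0))
# 5000.0 approach / 2700.0 shadow / 1000.0 retreat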

3. SYNCHRONIZED SWARMING SIMULATOR ALGORITHM

The simulator of the synchronized swarming of an AMPs group against an active threat is presented in Figure 3. The algorithm originates from the basic algorithm for the simulation of swarming against a passive threat, which has been upgraded by data structures and control mechanisms for models that imply the existence of an active threat, equipped with resources for fighting against the AMPs participating in the swarming.

Figure 3. Synchronized swarming simulator algorithm

At the activation of the simulator, the initialization is done first, comprising the entry of the following data:
- C4I system's information renewal interval Δt (s);
- dimensions of the simulated territory in which the combat activities take place: maximal values of the coordinates x_max (m) and y_max (m);
- initial AMPs deployment, as a set of pairs of coordinates x_i(0) and y_i(0) for t = 0; it can be random, for the simulation of the worst case when the AMPs do not expect the appearance of a threat, or prescribed, when some information on the appearance and mission of a threat is known in advance (the latter case is used when the goal of the simulation is research of the optimal initial AMPs deployment for defence against such a threat by means of swarming tactics);
- characteristics of the AMPs applying synchronized swarming against an active threat: their total number N, maximal velocities V_m-i (m/s), weapons ranges D_W-i (m), status S_i (existing = 1, destroyed = 0), compatibility of the AMP-i weapon W-i with the threat, K_ij (0/1), and the possible effect of the weapon W-i against the threat P-j, U_ij;
- threat's weapon characteristics: effective range R_P-j (m), minimal time between repeated actions, and the probability of hitting the target as a function of distance;
- dynamic law of the threat movement: initial threat's coordinates (x_p, y_p at time instance t = 0), final threat's coordinates upon the accomplished mission (x_k, y_k at time instance t = T_m-j), and the functional dependence of the coordinates' change on the simulated time.

Upon the simulator activation, the status of every armed mobile platform AMP-i, i = 1, ..., N, is checked in the main program loop.
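The following minimal, self-contained Python sketch illustrates the structure of this main loop as described in the remainder of the section; the movement and fire models are deliberately simplified placeholders (straight-line motion, fixed kill probability), so it shows only the control flow and is not the authors' implementation.

import math, random

def run(amps, threat, dt, t_mission):
    t = 0.0
    while t <= t_mission:
        ku_j = 0.0
        for amp in amps:
            # Expression (4): skip unassigned, incompatible or destroyed platforms
            if not (amp["assigned"] and amp["compatible"] and amp["alive"]):
                continue
            d = math.hypot(amp["x"] - threat["x"], amp["y"] - threat["y"])
            # Threat fire action when the AMP is inside the threat weapon range
            if d <= threat["weapon_range"] and random.random() < threat["kill_prob"]:
                amp["alive"] = 0
                continue
            # Expression (6): accumulate the effect of AMPs currently in the SZ
            if threat["sz_inner"] <= d <= threat["sz_outer"]:
                ku_j += amp["effect"]
        if ku_j >= threat["critical_threshold"]:       # condition (2)
            return "swarm succeeded", t
        # Simplified movement: every surviving AMP closes on the threat at v_max
        for amp in amps:
            if amp["alive"]:
                d = math.hypot(threat["x"] - amp["x"], threat["y"] - amp["y"]) or 1.0
                amp["x"] += amp["v_max"] * dt * (threat["x"] - amp["x"]) / d
                amp["y"] += amp["v_max"] * dt * (threat["y"] - amp["y"]) / d
        threat["x"] += threat["vx"] * dt               # assigned law of motion (here: constant velocity)
        threat["y"] += threat["vy"] * dt
        t += dt
    return "threat accomplished its mission", t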

If during the initialization the platform AMP-i has not been assigned to participate in swarming against the threat P-j, or its weapon is not compatible with the threat P-j, or AMP-i has already been destroyed by the active threat (S_i = 0), which is represented by the following logical expression:

A_{ij} \cdot K_{ij} \cdot S_i = 0    (4)

the simulator proceeds to trial the next platform, AMP-(i+1).

If condition (4) is not fulfilled, it is checked whether the distance D_ij from AMP-i to the threat P-j is less than or equal to the threat's weapon effective range R_P-j. If D_ij <= R_P-j, the threat P-j fire action at the platform AMP-i is simulated by sampling the AMP destruction probability distribution. That distribution depends on the kind of the threat's weapon and the kind of AMP at which the threat applies its fire action, and is entered at the simulator initialization.

In the general case, the probability of destroying AMP-i is a descending function of its distance to the threat, D_ij, with the value of 100% for D_ij = 0, a value defined by the type of weapon at the distance equal to the weapon effective range, D_ij = R_P-j, and the value of 0 for distances greater than R_P-j.

If the outcome of the threat unit P-j fire action is the destruction of AMP-i, the value S_i = 0 is assigned to its status and the simulator proceeds to trial the next platform, AMP-(i+1).

If AMP-i has not been destroyed, it is checked whether it is within the synchronization zone (SZ), i.e. whether the following condition is fulfilled:

R_{OR-j} - q \le D_{ij} \le R_{OR-j} + q    (5)

If condition (5) is fulfilled, the value of the possible cumulative effect of all AMPs temporarily present in the SZ is updated on the basis of the expression:

KU_j = KU_j + U_i    (6)

If the updated value of the possible cumulative effect of all AMPs temporarily present in the SZ is greater than its critical threshold PKU_j, the condition defined by expression (2) is satisfied, and the swarm of AMPs has succeeded in neutralizing the active threat P-j. If not, the simulator proceeds to trial the next platform, AMP-(i+1).

If condition (5) is not satisfied, the platform AMP-i is not within the SZ, the coordinates of its next position are calculated, and then the simulator proceeds to trial the next platform, AMP-(i+1).

The coordinates of the next positions, i.e. the new deployment of the swarming AMPs at time t = t + Δt, are calculated in the following way:

If the platform AMP-i is outside the SZ and out of the threat's weapon effective range, it is directed to the last known threat's position provided by the C4I system; the coordinates of its next position are calculated based on its maximal velocity and the trajectory it will pass in the time interval Δt. The goal of such motion of AMP-i is to find itself within the synchronization zone SZ as soon as possible.

If the platform AMP-i is out of the SZ, but within the effective range of the threat's weapon, it is directed opposite to the last known threat's position provided by the C4I system; the coordinates of its next position are calculated based on its maximal velocity and the trajectory it will pass in the time interval Δt. The goal of such motion of AMP-i is to try to avoid destruction by the active threat's weapon and to find itself within the synchronization zone SZ as soon as possible.

If the platform AMP-i is within the SZ, it continues motion with the last course and velocity which the threat P-j had until that moment, based on the last information provided by the C4I system. The coordinates of the platform's next position are calculated based on that velocity, course and the trajectory it will pass in the time interval Δt. The goal of such motion of AMP-i is to keep itself at a safe distance, in the synchronization zone SZ, and to follow the threat, moving parallel with it and at its velocity, until the moment when enough AMPs gather in the SZ, which ensures that their cumulative effect is greater than the critical threshold PKU_j, i.e. condition (2) is satisfied and the success of the swarm is provided.

The coordinates of the next position that the active threat P-j should reach at the moment t = t + Δt are calculated based on its assigned law of motion, which is entered during the simulator initialization.

All platforms AMP-i, i = 1, ..., N, are analyzed in the described way, and then the simulation clock is incremented by one basic interval Δt. If the new value of the simulated time is greater than the threat's mission accomplishment time T_m-j, it is concluded that the threat has reached its final position before the fulfilment of condition (2), so the simulation of swarming against the active threat ends in failure.

If the new value of the simulated time is still less than T_m-j, the threat P-j and all platforms AMP-i are moved to their next positions, and the simulator begins a new pass through the main loop, checking the status of all swarming platforms AMP-i in the described way.

5. CONCLUSION

The synchronization problem in swarming tactics is significant, because the introduction of synchronization is an attempt to reduce the failure probability in swarming against an active threat.

It comes to the fore in situations when, due to the arrival of the armed mobile platforms at the distance of their possible fire actions at different time instances, an active threat can destroy them one by one and put the successful outcome of the swarming tactics application into question.

A new swarming synchronization concept has been
presented in this paper, based on the introduction of the synchronization zone and various procedures of the armed mobile platforms, dependent on their temporary positions relative to the active threat and the synchronization zone.


Based on this synchronization concept, the new algorithm of synchronized swarming of a group of armed mobile platforms against an active threat has been developed and presented.
The described algorithm has been developed starting from the basic algorithm of a class of simulators of military swarming systems, realized in the research so far, especially for the application of new tactical procedures in the use of armoured and mechanized units.
The basic algorithm, for unsynchronized swarming against passive threats, has been improved by mechanisms and data structures which allow the implementation of the synchronization concept in swarming against active threats.

References
[1] Arquilla,J., Ronfeldt,D.: Swarming and the Future of Conflict, RAND Corporation, 1999.
[2] Edwards,S.J.A.: Swarming on the Battlefield: Past, Present and Future, RAND Corporation, 2000.
[3] Edwards,S.J.A.: Swarming and the Future of Warfare, RAND Corporation, 2005.
[4] Price,I.C., Lamont,G.B.: GA directed self-organized search and attack UAV swarms, Proceedings of the 2006 Winter Simulation Conference, Monterey, CA, USA, 2006, pp. 1308-1315.
[5] Nowak,D.J., Price,I., Lamont,G.B.: Self organized UAV swarm planning optimization for search and destroy using SWARMFARE simulation, Proceedings of the 2007 Winter Simulation Conference, Washington, DC, USA, 2007, pp. 1315-1323.
[6] Pohl,A.J., Lamont,G.B.: Multi-objective UAV mission planning using evolutionary computation, Proceedings of the 2008 Winter Simulation Conference, Miami, FL, USA, 2008, pp. 1268-1279.
[7] Singer,P.W.: Wired for war? Robots and military doctrine, Joint Force Quarterly, 52(1), 2009, pp. 105-110.
[8] Jankovic,R., Milinovic,M., Jeremic,O., Nikolic,N.: On Application of Discrete Event Simulation in Armoured and Mechanized Units Research, Proceedings of the 1st International Symposium & 10th Balkan Conference on Operational Research, Thessaloniki, Greece, 2011, Vol. 2, pp. 28-35.
[9] Jankovic,R.: Computer Simulation of an Armoured Battalion Swarming, Defence Science Journal, Vol. 61, No. 1, pp. 36-43, January 2011.
[10] Jankovic,R.: Data structures and control mechanisms for multi-target swarming simulators, Electronics Letters, Vol. 48, No. 16, pp. 997-998, 2012.

DETERMINING PROJECTILE CONSUMPTION DURING INDIRECT


MORTAR FIRE
ACA RANDJELOVI
University of Defense, Military Academy, Belgrade, aca.r0860.ar.@gmail.com
VLADO DJURKOVI
University of Defense, Military Academy, Belgrade, vlado.djurkovic@va.mod.gov.rs
PETAR REPI
Signal Brigade, Belgrade, repic92@gmail.com

Abstract: Determining projectile consumption is one of the more important factors for achieving success when conducting indirect fire tasks, and it depends on the accuracy of the preparation of initial elements, the mathematical expectation of the number of hits and the probability of hitting the target. This paper shows the procedure for determining the projectile consumption of a 120 mm mortar platoon during indirect engagement of a predetermined target in accordance with the predefined number of required direct hits, based on the developed model and by applying appropriate formulas, coefficients and table values.
Keywords: projectile consumption, probability of hitting the target, mathematical expectation of the number of direct hits, indirect fire.

1. INTRODUCTION
Mortar units are the main carriers of fire support for
infantry and mechanized battalions in the Serbian
Military. They execute their tasks by engaging the target
with accurate and precise fire of 120 mm and 82 mm
mortars.
By engaging their targets indirectly, mortar units achieve better protection of their resources from enemy fire and take better advantage of their maximum effective ranges when executing their combat tasks. However, the indirect fire procedure requires more time, and the preparation for engagement is more complex, with a higher projectile consumption.
Indirect mortar fire consists of two phases: preparation and execution (Image 1). These two phases consist of a number of activities that have to be executed step by step, in the right way and quickly. This provides the right timing, accuracy of fire and safety during the execution phase.
Group engagement is the final phase of indirect engagement, through which the main fire effect of the combat systems is achieved, together with the greatest material and/or morale effects, which are of great importance for further operations. By engaging the target in groups, mortar units can neutralize, block, destroy or obstruct the enemy [1].

Image 1. Activities during indirect engagement

The efficiency of execution of the listed tasks depends on multiple interdependent factors, of which determining the projectile consumption is the most important one.
2. EFFICIENCY OF MORTAR UNITS

The efficiency of mortar units represents the degree of success of task execution, expressed as the impact of engagement and projectile consumption, or as the probability of successful task execution. It is described as the efficiency of engagement as well as the capability to execute other tasks from the domain of fire support, in various weather, spatial and combat conditions [3].
Projectile consumption is determined according to the assigned task, the area and/or target type, the distance, the way of determining the elements of group engagement and the projectile consumption allowance. Also, projectile consumption is greatly affected by the probability of hitting the target and the mathematical expectation of the number of hits, which, combined with the response time, represent the basic factors of efficiency [2].
The efficiency of group engagement is achieved by:
- determining the elements of group engagement as accurately as possible,
- using an appropriate number of weapon systems and a sufficient amount of projectiles,
- a proper choice of projectiles, charges and aims, that is, a proper choice of engagement and type of fire,
- uniformly distributing fire in the target area,
- using the effect of surprise, and
- timely corrections of elements during the execution of group engagements.

The effect of group engagement is the realistic indicator of the success of mortar fire and it is the result of indirect fires. It is evaluated on the basis of the explosions accomplished on the target and the degree of the accomplished fire effect, and it is determined by: the size of the deviation (error) of the middle hit from the center of the target, the number of direct hits and the number of lethal fragments, as well as the projectile consumption.

Taking into account the listed elements that determine the effect of group engagement, it can be seen that the effect increases with the decrease of the error and the increase of the number of direct hits, with as low a projectile consumption as possible.

Mortar units achieve efficient execution of the assigned task by engaging the target in groups and by: using the elements obtained through complete preparation of initial elements, the elements obtained through fire correction, and by shifting fire on the topographical-geodetic basis (TGB) using data from fire correction [4]. The listed ways are accompanied by certain errors, shown in Table 1.

Table 1. Numerical values of errors (mean errors: x by distance, in Vd; y by direction, in Vp)
1. Complete preparation of initial elements
2. Shortened preparation of initial elements
3. Fire correction on the target: 0,5 - 2,2; 0,9 - 9,0
4. Shifting fire within allowed limits from the target: 1,3 - 1,5; 2,5 - 3,5

The demonstrated values reflect the degree of accuracy of the preparation of initial elements, which has a great influence on the group engagement and its efficiency in accomplishing the end (planned, desired) effects on the target.
The accuracy of the preparation of initial elements directly influences the probability of hitting the target and the projectile consumption. This accuracy is directly proportional to the probability of hitting, and inversely proportional to the consumption of projectiles. This implies that the probability of hitting is greater and the consumption smaller if the group engagement is executed with the elements of complete preparation, which makes indirect mortar fire more accurate and precise [5,6].

The probability of hitting a surface target (square or rectangle shaped) is calculated using the following formula [7]:

V = \frac{1}{4}\left[\hat{\Phi}\left(\frac{x+a}{V_d}\right) - \hat{\Phi}\left(\frac{x-a}{V_d}\right)\right]\left[\hat{\Phi}\left(\frac{y+b}{V_p}\right) - \hat{\Phi}\left(\frac{y-b}{V_p}\right)\right]    (1)

where:
V - probability of hitting a surface target,
x - deviation of the middle hit from the center of the target by distance,
y - deviation of the middle hit from the center of the target by direction,
a - half of the length (depth) of the target,
b - half of the width of the target,
Vd - probable deviation by distance,
Vp - probable deviation by direction,
Φ̂ - the reduced Laplace (probable error) function.

By applying the given formula, for a 120 mm mortar, we calculated the probability of hitting a square surface target with sides of 100 m, at a distance of 4000 m, under the assumption that the middle hit is in the center of the target (Sp = Cc), and for different accuracies of the initial elements (Table 2).

Table 2. Comparative display of different probabilities of hitting the target with a 120 mm mortar
num | Elements of engagement | Probability of hitting (%)
1.  | table                  | 70,35
2.  | complete preparation   | 33,38
3.  | shortened preparation  | 18,17

A comparative analysis of the obtained results leads to the conclusion that the probability of hitting the target with the elements obtained through complete preparation is closer to the table probabilities, and that it is greater than the probability of hitting the target with the elements obtained through shortened preparation of initial elements.

The influence of the accuracy of preparation on the projectile consumption has been analyzed for a 120 mm mortar platoon, for achieving 20 direct hits, via the mathematical expectation of the number of hits when firing 40 projectiles (10 per mortar), and by applying the calculated probabilities of hitting with complete and shortened preparation of initial elements in accordance with the above stated conditions of firing (Table 3).

Table 3. Mathematical expectation of the number of hits and projectile consumption (unit: 120 mm mortar platoon)
                                            complete      shortened
Mathematical expectation:
  number of fired projectiles (n)           40            40
  probability of hitting (p)                0,3338        0,1817
  number of direct hits (α = n x p)         13,35 ≈ 13    7,26 ≈ 7
Projectile consumption:
  number of desired hits (α)                20            20
  probability of hitting (p)                0,3338        0,1817
  projectile consumption (n = α / p)        59,91 ≈ 60    110,07 ≈ 110

A comparative analysis of the obtained data leads to the conclusion that by using complete preparation of initial elements a greater number of direct hits and a lower projectile consumption can be expected, which makes the unit up to 45% more efficient in solving the assigned tasks.
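The relationships used above (formula (1) for the hit probability, the mathematical expectation α = n·p and the consumption n = α/p) can be checked with a short Python sketch; the probable deviations in the last call are illustrative assumptions, while the probabilities 0,3338 and 0,1817 are the values from Table 2.

import math

RHO = 0.476936  # constant linking the probable error to the Gauss error function

def phi_hat(t):
    """Reduced Laplace function: probability of a deviation smaller than t probable errors."""
    return math.erf(RHO * t)

def hit_probability(a, b, vd, vp, x=0.0, y=0.0):
    """Formula (1) for a rectangular target 2a x 2b with the middle hit offset (x, y)."""
    return 0.25 * (phi_hat((x + a) / vd) - phi_hat((x - a) / vd)) \
                * (phi_hat((y + b) / vp) - phi_hat((y - b) / vp))

# Table 3 logic for the 120 mm mortar platoon
for label, p in (("complete", 0.3338), ("shortened", 0.1817)):
    hits_expected = 40 * p          # mathematical expectation for 40 fired projectiles
    consumption = 20 / p            # projectiles needed for 20 desired direct hits
    print(label, round(hits_expected, 2), round(consumption))
# complete 13.35 60  /  shortened 7.27 110

# Illustrative use of formula (1): 100 x 100 m target, assumed Vd = 30 m, Vp = 20 m
print(round(hit_probability(a=50.0, b=50.0, vd=30.0, vp=20.0), 4))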

3. FIRING TASK MODEL

The firing task model on the basis of which the projectile consumption is determined is founded on the following assumptions:
- the task is conducted by a 120 mm mortar platoon consisting of 4 mortars, firing the light TF M62P3 shell with the UTU M62 fuse set to immediate action,
- the target has an area of 1 ha - sides of 100 x 100 m (an air defence artillery battery),
- the group engagement is conducted with the elements of complete preparation,
- the distance to the target is 4000 m,
- the goal is to neutralize the enemy to the degree of 30%,
- the probability of hitting the target (p) with the first projectile is 33,38%,
- the number of expected/desired hits (α) is 40,
- the allowed number of projectiles (n) for the combat task (firing) execution is 160.

In accordance with the given conditions for task execution we determined: the total number of projectiles for the unit (necessary for task execution), the number of projectiles for each mortar, the mathematical expectation of the number of hits with the allowed number of projectiles, as well as the necessary number of projectiles for accomplishing the desired hits.

4. PROJECTILE CONSUMPTION FOR FIRING TASK EXECUTION

Calculations of projectile consumption provide a realistic image of the assigned tasks of mortar units for commanders and the superior command.
The projectile consumption necessary for firing task execution is calculated using the following formula [8]:

Up = Pc(ha) · N · kp · kdg · kpr · kvgu    (2)

where:
- Pc(ha), target area = 1,
- N, table consumption for target neutralization (5E) at Dg ≤ 10 km and En = 25% = 150 projectiles,
- kp, transition coefficient for En = 30% = 1,30,
- kdg, distance coefficient for Dg = 5 km (Dg ≤ 10 km) = 1,
- kpr, preparation coefficient (complete preparation of initial elements) = 1,
- kvgu, type of fire and fuse coefficient (fuse with instantaneous action) = 1.

By inputting the table values into the formula, we get the estimate of the projectile consumption for the given task by the following model:

Up = 1 · 150 · 1,30 · 1 · 1 · 1 = 195 projectiles

Dividing the calculated result by the number of weapon systems in the mortar platoon gives the projectile consumption per system:

Up' = Up / (number of weapons in the platoon) = 195 / 4 = 48,75 ≈ 49 projectiles    (3)

Multiplying the projectile consumption per system (Up') by the number of mortars in the mortar platoon gives the total projectile consumption for the given task execution, which in this case is 196 projectiles (49 projectiles x 4 mortars).

In accordance with the allowed number of projectiles for task execution, the mathematical expectation (MO) was calculated in accordance with the probability of hitting the target (p):

MO = n x p = 160 x 0,3338 = 53,41 ≈ 53 hits    (4)

In accordance with the developed model and the conditions for task execution, the projectile consumption (n) needed for obtaining the expected/desired hits (α) on the target was calculated, depending on the probability of hitting the target (p):

n = α / p = 40 / 0,3338 = 119,83 ≈ 120 projectiles    (5)

Based on the calculated results, we can get a picture of the effect of projectile consumption on the task execution and of the ability to execute indirect fire tasks in accordance with the commander's intent.
In the case of the model above, it can be concluded that the allowed amount of projectiles (n) does not enable target neutralization to the degree of 30%; however, it does provide 40 direct hits on the target with fewer projectiles than allowed.

With this conclusion, the commander and the superior command get a realistic picture of the possibilities of the mortar systems under their command, and as a result a proposal can be made to increase the number of projectiles or to reduce the degree of neutralization of the target below 30%.

5. CONCLUSION

The importance of determining projectile consumption was highlighted in this paper, as well as its impact on solving indirect firing tasks in accordance with the commander's intent.
A critical analysis of the effect of the accuracy of the preparation of initial elements on the probability of hitting the target was conducted, as well as of its effect on the mathematical expectation of the number of hits and the projectile consumption, and of their causal relationship with the efficiency of mortar units.
All in all, a valid calculation of projectile consumption was conducted in accordance with the conditions given by the firing task model, based on whose results appropriate conclusions were made and a realistic possibility of task execution was estimated for a 120 mm mortar platoon.
The listed calculations were done by applying appropriate formulas and table values of coefficients, which enable us to obtain usable data that represent the basis for making proposals for more efficient engagement of mortar units and for possible requests to change the type of engagement on the given target.
Based on that, it can be concluded that projectile consumption represents an important factor of task execution and, together with the probability of hitting the target and the accuracy of the preparation of initial elements and group engagement, influences the degree of accomplishment of fire effects on the target during task execution.
Also, determining the projectile consumption defines the possibility of achieving the desired number of direct hits and supports decision making and the setting of realistic tasks by the commander and the superior command.

References
[1] Artiljerijsko pravilo gadjanja, SSNO, Uprava artiljerije, VINC, Beograd, 1991.
[2] Kokelj,T.: Zbirka resenih zadataka iz teorije artiljerijskog gadjanja, VIZ, Beograd, 1999.
[3] Kovac,M.: Metod odredjivanja efikasnosti artiljerijskih i raketnih jedinica za podrsku, Sektor za skolstvo, obuku i NID, VIZ, Beograd, 2000.
[4] Randjelovic,A., Djurkovic,V., Kokelj,T.: Uticaj tacnosti pripreme pocetnih elemenata posrednog gadjanja na izvrsenje grupnog gadjanja minobacacem 120 mm, 6. Medjunarodna naucna konferencija OTEH, Beograd, 2014.
[5] Kokelj,T., Regodic,D.: Tacnost potpune pripreme pocetnih elemenata posrednog gadjanja, Vojnotehnicki glasnik, br. 2, 2005.
[6] Randjelovic,A., Kokelj,T., Komazec,N.: Tacnost skracene pripreme pocetnih elemenata posrednog gadjanja minobacacem 120 mm, 17. medjunarodna DQM konferencija, Beograd, 2014.
[7] Kokelj,T., Randjelovic,A.: Zbirka resenih zadataka iz pravila gadjanja naoruzanjem pesadije, VIZ, Beograd, 2010.
[8] Zivanov,Z.: Teorija gadjanja, VIZ, Beograd, 1979.

PROPELLER AND SHIP MAIN ENGINE SELECTION IN CORRELATION WITH OVERALL EFFICIENCY PROPULSION COEFFICIENT IMPROVEMENT
JOVO DAUTOVI
Technical Test Center, Vojvode Stepe br. 445, Belgrade; e-mail jovodaut@gmail.com
VOJKAN MADI
Technical Test Center, Vojvode Stepe br. 445, Belgrade; e-mail vmadic@gmail.com
SONJA URKOVI
Military Technical Institute, Bulevar vojvode Bojovica br.2, Belgrade; e-mail sonja.djurkovic@yahoo.com

Abstract: This paper presents a propeller selection method for the case when the power of the ship engine is known, and also the influence of correct propeller selection on a significant improvement of the overall propulsion efficiency coefficient. After that it shows the method of main engine selection in the case of vessel modernization, i.e. the replacement of one type of drive with another while retaining the old propellers. Main engine selection in that case is shown on the example of the selection of the electric motor drive of the ship KOZARA in the process of its modernization, during which the diesel engine propulsion was replaced by diesel electric propulsion. In the end, shaft power measurement results on the shaft lines are shown, which confirm the correct selection of the main engine.
Keywords: propeller selection, electric motor selection, propeller curve, shaft power measurement, performance recording.

1. INTRODUCTION
During the design of ships, special attention should be paid to the appropriate selection of the propeller and the drive engines and to their mutual matching.
For proper utilization of the engine's possibilities, it should be possible to achieve the maximum revolutions per minute (rpm) at which the engine has its maximum power.
During the service of the ship, modernization is usually carried out after years of use, which often includes the replacement of one type of drive engine with another. In this case the old propellers are usually retained, and attention should be paid to the proper selection of the new drive engines, which must be matched with the propellers.
If the propeller is matched to a maximum rpm reduced by some percentage, the engine power will be reduced in accordance with the propeller curve. In other words, if the shaft torque curve is not in accordance with the engine torque curve, the full power of the engine will never be used.
An example of this case is the modernization of the ship "Kozara", which was carried out during 2011 and 2012. During this period the diesel engine propulsion was replaced with diesel electric propulsion (DEP). The DEP consists of two diesel engines, two electric drive motors with gearboxes, and bow thrusters. That type of modernization was performed for the first time on a Serbian warship.
The main difference between the diesel propulsion and the DEP is that in a diesel propulsion system the propeller is driven by the diesel engine, while in the DEP system the propeller is driven by an electric motor. The electric motor power supply is provided by generators which are driven by diesel engines.

2. PROPELLER SELECTION

Significant improvements in the overall efficiency coefficient of standard ship drive systems are mainly achieved by proper selection of the propeller. The mutual working of the propeller and the drive engine should be such that the operating point of the diesel engine always lies inside the working area for which the engine is designed, in order to increase the overall efficiency coefficient [1].

Figure 1 shows the corresponding characteristics of the diesel engine and the propeller curves [2]. For constant (i.e. maximum) torque:

P = 2 \pi n M    (2.1)

which indicates that the power is proportional to the rpm (Figure 1). If the torque is not constant, it is assumed that the power changes along a cubic parabola as a function of the number of revolutions,

P = c n^3    (2.2)

which means that the torque is proportional to the square of the rpm.

This means that the torque coefficient K_Q is constant. If it is known that the advance ratio J of a fixed-blade propeller is constant, then the propeller thrust coefficient K_T is also constant. This is the usual assumption in the normal speed range of displacement ships, and the power formula is:

P_D = 2 \pi \rho n^3 D^5 K_Q    (2.3)
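As a short numerical illustration of the propeller law (not taken from the paper): with the power proportional to the cube of the revolutions, halving the rpm leaves only one eighth of the absorbed power.

def propeller_law_power(n, c=1.0):
    # P = c * n^3, so the torque P / (2*pi*n) grows with n^2
    return c * n ** 3

print(propeller_law_power(1.0), propeller_law_power(0.5))   # 1.0 vs 0.125 of full power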


Note also the upper rpm limit on the (design) line 1 say
point D at 105% maximum rpm. At this point the full
available power will not be absorbed in the clean hull-calm water trials, and the full design speed will not be
achieved. In a similar manner, if the ship has a light load,
such as in ballast, the design curve 1 will move to the
right (e.g. towards line 1a ) and, again, the full power
will not be available due to the rpm limit and speed will
be reduced.
These features must be allowed for when drawing up
contractual ship design speeds (load and ballast) and trial
speeds.


is defined by
The basic assumption is made that
the engine manufacturers as the Propeller Law and they
design their engines for best efficiency (e.g. fuel
consumption) about this line. It does not necessarily mean
that the propeller actually operates on this line, as
discussed earlier.
It is important to match the propeller revolutions, torque
and developed power to the safe operating limits of the
installed propulsion engine. Typical power, torque and
revolutions limits for a diesel engine are shown in Figure
2, within ABCDE [2].

Figure 1. Matching of propeller to diesel engine


It should be noted, however, that the speed index does change with different craft (e.g. high-speed craft or very slow displacement ships), in which case P = c n^x, where x will not necessarily be 3, although it will generally lie between 2.5 and 3.5.
Typically it is assumed that the operator will not run the
engine to higher than 90% of its continuous service rating
(CSR).
The marine diesel engine characteristics are usually based on the propeller law, which assumes P = c n^3 and is acceptable for most displacement ships. The basic design power curve passes through the point A.

Figure 2. Typical diesel engine limits (within ABCDE)


It should be noted that the propeller pitch determines at
what revolutions the propeller, and hence engine, will run.
Consequently, the propeller design (pitch and revolutions)
must be such that it is suitably matched to the installed
engine.

When designing the propeller for, say, a clean hull and calm water, it is usual to keep the actual propeller curve to the right of the engine line, such as on line 1, to allow for the effects of future fouling and bad weather. In the case of line 1 the pitch is said to be light, and if the design pitch is decreased further, the line will move to line 1a, etc.

As the ship fouls, or encounters heavy weather, the design curve 1 will move to the left, towards line 2. It must be noted that, in the case of line 2, the maximum available power at B is now not available due to the torque limit, and the maximum operating point is at C, with a consequent decrease in power and, hence, ship speed.

3. ELECTRIC ENGINE SELECTION ON THE SHIP KOZARA

For the correct selection of the electric drive engines on the ship Kozara it was necessary to calculate the power which the engines deliver to the propellers. The first step was to calculate the power delivered to the propeller by the old diesel engines. The transmission of power from the diesel engine to the propeller produces shaft transmission losses, which are expressed by the shaft transmission efficiency. For the calculation of the shaft transmission efficiency the following coefficients are taken into account: the thrust bearing (0.999), the steady bearings - 9 pieces (0.995) and the bulkhead stuffing boxes - 3 pieces (0.995). In this way the shaft transmission efficiency η_V is obtained.

The delivered power in the case of the classic diesel drive is given by:

P_DEL = η_V · P_N    (3.1)

where P_N is the nominal diesel engine power.

Figure 3 shows the old ship propulsion system with the diesel drive engine. The left and right drive systems are identical.

After calculating the electric motor power it is necessary to select the type of electric motor. Asynchronous motors have been selected because of their advantages compared to the other electric motors.

Figure 3. Diesel engine drive propulsion system
1 - diesel engine; 2 - thrust bearing; 3, 5 - first part of the propeller shaft steady bearings; 4 - bulkhead stuffing box I; 6, 8 - second part of the propeller shaft steady bearings; 7 - bulkhead stuffing box II; 9, 10 - third part of the propeller shaft steady bearings; 11 - bulkhead stuffing box III; 12 - stern tube; 13 - stern boss; 14 - bracket bearing

Two electric motors, each having a power of 250 kW, type B6AZJ 354-04, produced by Koncar GIM, are installed in the ship Kozara. The transfer of power from the electric motor to the propeller is carried out through the gearbox. An electric motor drives a propeller. The electric motors' speed and torque control is carried out automatically via voltage frequency converters.

After calculating the power which the old diesel engines delivered to the propellers, the estimation of the electric drive engines' power was started.

Two gearboxes ZF W350-1, produced by ZF Friedrichshafen AG of Germany, with a transmission ratio of 1:3,968, are installed in the ship in order to transmit the power from the EM to the propeller.

Figure 4 shows the diesel electric propulsion system. The new propulsion system also consists of an identical left and right system. In addition, Figure 4 also shows a bow thruster.

Compared to the other variants of power transmission from the drive engine to the propellers, the electric drive has several advantages that can be seen from the diagram in Figure 5 [3].

Figure 4. Diesel electric drive propulsion system
1 - diesel engine; 2 - generator; 3 - electric drive engines; 4 - gear box; 5 - bulkhead stuffing box I; 6, 7 - propeller shaft steady bearings; 8 - bulkhead stuffing box II; 9 - stern tube; 10 - stern boss; 11 - bracket bearing; 12 - bow thruster
At the beginning, the losses in the transmission of power from the electric motor to the propeller are estimated. The gearbox efficiency (together with the thrust bearing efficiency) η_G (0.98) and the shaft transmission efficiency η_V (0.97) are taken into account.
The delivered power in the case of the DEP is given by:

P_DEL = η_G · η_V · P_EM    (3.2)

which indicates that the electric engine power is calculated by the formula:

P_EM = P_DEL / (η_G · η_V)    (3.3)

If the delivered power of the classic diesel drive from (3.1) is entered into formula (3.3) instead of P_DEL, the electric motor (EM) power is obtained for which the DEP gives the same delivered power to the propeller as the diesel engine (3.4). The calculated EM power is 287.6 kW. An EM with a power of 250 kW was chosen for the ship propulsion. It was estimated that the ship with the 250 kW EMs would achieve the required speed. The assessment took into account the fact that the ship with the new drive engines has a smaller displacement. The selection of the drive EM was also influenced by the huge difference in the purchase price between the selected EM and an EM of higher power. If the selected EM power is entered into the above formula, the power which can be permanently delivered to the propeller is obtained, and it equals P_DEL = 238 kW.

Figure 5. Propeller, diesel engine and electric motor curves: electric motor curve; diesel motor curve; overloaded propeller curve; propeller curve

It can be seen from the diagram that the drive diesel engine, which is most commonly used on inland waters, can change its speed in the range from about 40% to 100% (from the idling regime to 100% of the nominal rpm). Speed above this

range can be increased only in the short term, usually up to 110%. It is not possible to reduce the rpm below this range either, because that can cause the engine to stop.
below this range, because it can cause the engine stop.
The ship load curves are often very different. The
example is loaded or unloaded push boat or a tugboat in
free run regime or in tow regime. The figure 5 shows two
cases of propeller curve, one when propeller is overloaded
and the other when the propeller curve has shape of cubic
parabola as a function of rpm.

Shaft torque measurement results at different propeller rpm are shown in Table 1 [6].

Table 1. Shaft torque measurement results and calculated values of power
Number | Torque (Nm) | Shaft speed (min-1) | Power P (kW)
1.     | 510,68      | 101                 | 5,4
2.     | 1925,27     | 202                 | 40,7
3.     | 4123,54     | 301                 | 129,9
4.     | 6425,7      | 375                 | 252,2
5.     | 6758,64     | 393                 | 278

Figure 7 shows the diagrams of torque and shaft power as functions of the rpm.

With the electric engine transmission, power increases linearly with the propeller speed of rotation up to the nominal value, but from that point it is possible to increase the speed of rotation up to 60% above the nominal value, with the power remaining at the nominal value. As a result, the electric motor can operate at 100% power for all cases of the propeller curve, unlike the diesel engine, which in the case of the overloaded-propeller curve can be operated only at speeds from idling up to speed n1. This is the biggest advantage of the electric motor as the drive unit.

4. PROPELLER SHAFT TORQUE MEASUREMENT AND PERFORMANCE RECORDING

Figure 7. Diagram of shaft power and torque measurement

By measuring the propeller shaft power it can be checked whether the propeller is accurately chosen according to the engine. Figure 6 shows the acquisition system for the shaft power measurement.

Figure 6. Propeller shaft torque and rpm measurement

It is a multi-channel mobile measuring system SPIDER 8 by the manufacturer HBM (Hottinger Baldwin Messtechnik). SPIDER 8 is connected with the corresponding sensors: strain gauges of type XY21 6/120 [4] and an optical rpm measuring system of type AO1 [5]. With this measuring system the propeller shaft torque and rpm are measured. Based on the measured propeller shaft power, the propeller curve can be drawn and a constant that depends on the characteristics of the propeller can be found.

Shaft power measurements confirm the electric motor power and the possibility that the motors run in an overload of 10% over a period of one hour.

Based on the results of measuring the torque and power on the propeller shaft, the coefficient K = 0.00476 is determined in the expression for calculating the propeller characteristic:

P_K = K n^3    (4.2)

The expression for the ship Kozara propeller characteristic is [7]:

P_K = 0.00476 n^3    (4.3)

Shaft torque and rpm measurement is made on the left intermediate shaft, behind the gear box. Shaft power, i.e. the power absorbed by the propeller, is calculated by the formula:

P = \frac{2 \pi n M}{60}    (4.1)

The propeller shaft torque measurement on the diesel-electric drive ship Kozara was performed on 09.05.2012, during test trials on the river Danube.

Table 2 shows the value of the power to the propeller shaft calculated on the basis of the measured torque, P, and the power calculated on the basis of the propeller characteristic, P_K.

Table 2. Value of the power to the propeller shaft calculated on the basis of the measured torque, P, and the power calculated on the basis of the propeller characteristic, P_K

Number | Shaft speed (min-1) | P (kW) | P_K (kW)
1.     | 101                 | 5,4    | 4,9
2.     | 202                 | 40,7   | 39,2
3.     | 301                 | 129,9  | 129,8
4.     | 375                 | 252,2  | 251
5.     | 393                 | 278    | 288,9
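The relations used here can be verified with a short Python check: the shaft power (4.1) computed from the measured torque and rpm, and the propeller-law power (4.3) with K = 0.00476; the torque and rpm values are those from Table 1, everything else is a straightforward implementation of the two formulas.

import math

def shaft_power_kw(torque_nm, rpm):
    """Formula (4.1): P = 2*pi*n*M/60, result in kW."""
    return 2.0 * math.pi * rpm * torque_nm / 60.0 / 1000.0

def propeller_law_kw(rpm, k=0.00476):
    """Formula (4.3): propeller characteristic P = K * n^3, result in kW."""
    return k * rpm ** 3 / 1000.0

measurements = [(510.68, 101), (1925.27, 202), (4123.54, 301), (6425.7, 375), (6758.64, 393)]
for torque, rpm in measurements:
    print(rpm, round(shaft_power_kw(torque, rpm), 1), round(propeller_law_kw(rpm), 1))
# e.g. 101 min-1: 5.4 kW from the measured torque vs 4.9 kW from the propeller law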

Figure 8 shows the diagram of the power to the propeller shaft calculated on the basis of the measured torque, P, and the power calculated on the basis of the propeller characteristic, P_K, as a function of the propeller speed.


Figure 8. Diagram of the power to the propeller shaft calculated on the basis of the measured torque, P, and the power calculated on the basis of the propeller characteristic, P_K, as a function of the propeller speed

5. CONCLUSION

It can be concluded that for the proper functioning of the ship drive system it is important to match the propeller revolution speed with the torque and power of the drive engine. The propeller pitch determines the propeller revolution speed and thus also the engine revolution speed. Accordingly, the propeller (pitch and rpm) must be designed so that it is matched with the drive engine.
In the case of the ship "Kozara" modernization, the way of changing the drive engine type while retaining the old propellers is shown. The method of calculating the drive engine power is also shown, and according to it the electric motor type is chosen.
In order to check the accuracy of the drive engine selection, shaft torque measurement was performed during the ship trials. Based on the results of the torque measurements, the electric engine power delivered to the propeller was calculated.
Based on the shaft torque (power) measurement results on the ship "Kozara", the following can be concluded:
1. The results of the shaft power measurements confirm the good choice of the electric drive engine and the possibility that the motors run in an overload of 10% over a period of one hour.
2. The good matching of the measured and calculated shaft power results fully confirms the validity of the results and of the power measurement methods.
3. The shaft power measurement results also confirm the correct operation of the frequency converters. Although problems with THD voltage distortion were possible, it turned out that it is not necessary to apply any filters to reduce high voltage harmonics in the network [8].

References
[1] V. J. Gitis, V. L. Bondarenko, T. P. Jefimov, J. G. Poljakov, B. M. urbanov: Theoretical basis of the exploitation of marine diesel engines (translated from Russian), SSNO, Belgrade, 1973.
[2] A. F. Molland, S. R. Turnock, D. A. Hudson: Ship Resistance and Propulsion: Practical Estimation of Ship Propulsive Power, Cambridge University Press, 2011.
[3] B. Bilen, Z. Nikoli, Z. ovagovi, D. Bulovan: Improvement of the driving characteristics of the ships with electrical transmission, Institute of Technical Sciences SANU, Belgrade.
[4] HBM: An Introduction to Measurements using Strain Gages.
[5] Instructions for measuring the torque on the propeller shaft of the ship, TOC, C.33.003.
[6] The results of the test vessel, Handover record, the Ministry of Defence, Belgrade, 2012.
[7] J. Dautovi: Diesel electric drive of river military ships as a method of improving ship maneuvering characteristics, PhD thesis, Belgrade, 2016.
[8] D. Vueti, I. Vlahini: The impact of the serial inductance to reduce harmonic distortion of power line commutated inverter electric propulsion system of the ship, Maritime, god. 19, 2005, pp. 65-75.

OPTIMIZATION OF PLANETARY GEARS AND EFFECTS OF THE THIN-RIMMED GEAR ON FILLET STRESS
MILO SEDAK
Belgrade University, Faculty of Mechanical Engineering, Belgrade, msedak@mas.bg.ac.rs
TATJANA M. LAZOVI KAPOR
Belgrade University, Faculty of Mechanical Engineering, Belgrade, tlazovic@mas.bg.ac.rs
BOIDAR ROSI
Belgrade University, Faculty of Mechanical Engineering, Belgrade, brosic@mas.bg.ac.rs

Abstract: Planetary gears take a very significant place among the gear transmissions, and they are widely used in
military and civil industry applications such as marine vehicles, aircraft engines, helicopters and heavy machinery.
Planetary gears are complex mechanisms which can be decomposed into external and internal gears with the
corresponding interaction, which requires geometrical conditions in order to perform the mounting and an appropriate
meshing of the gears during their work. Planetary gears have a number of advantages as compared to the transmission
with fixed shafts such as a compact design, with co-axial shafts, high power density and higher efficiency, which is
achieved by reducing gear weight using thin-rimmed gears. The purpose of this paper is to present the optimization
model for the planetary gears, where the objective function is the weight of gears, and functional constraints imposed
upon their respective structural design. Hence, the objective is to minimize rim thickness of the gear in order to achieve
high-performance power transmission and minimize weight. This paper presents the results of an investigation with
finite element analysis (FEM) into the effects of thin-rimmed gear geometry on the root fillet stress distribution.
Keywords: Planetary Gear, Thin-rim Gear, Finite Element Analysis, Internal Gear, Root Stress.
in [5]. Therefore it is necessary to include the bending
fatigue failure and crack propagation that is above all
influenced by rim thickness, into the design phase to
avoid component failure mode. Moreover, it has been
shown that in addition to mass reduction, the rim
thickness introduces gear flexibility that decreases the
influence of internal gear and carrier errors [6]. The
purpose of this paper is to present a generalized
optimization procedure to the minimum volume design of
the planetary gearbox which takes into account the
bending fatigue life of the thin-rim gears.

1. INTRODUCTION
Planetary gear sets are of fundamental importance in
many applications that require high performance
mechanical energy transmission systems and are widely
used in automotive, military and aerospace industries.
Because of their larger torque-to-weight ratio, compared
with fixed shaft transmission systems, and other
numerous advantages, planetary gears found their
application in a rotorcraft transmissions, vehicle
transmissions systems and jet propulsion systems [1-2]
etc.

The usual methods of gear design and analysis are based


on the current standards published by the International
Organization for Standardization, German Institute for
Standardization and American Gear Manufacturers
Association. Although, the design and analysis of the
gears has been described and organized systematically in
the existing literature and standards, the procedures for
the design of the planetary gear set still require
specialized design knowledge, thus presenting a good
source for further research. This is mainly due to the
complex geometric and kinematics relations of the
planetary gears that provide mounting and appropriate
meshing of the planetary gears during their work. As a
result the conventional trial and error type of calculation
is time-consuming and the complexity of the considered
problem as well as the accuracy of the final construction
is limited by the tentative choices and designers intuition.

The weight minimization problem has been of


considerable interest, especially in high power density
transmission applications. A number of well-established
works concerning the minimum-mass design problem of
the gear reduction systems include the minimization of
the center distance as well as the volume of the internal
gears by Tong and Walton [3]. More recently, the
application of evolutionary algorithms on minimum mass
gear design problem was considered in [4]. To ensure low
weight of the planetary gear reduction unit the rim of the
internal gear, as well as the other thin-rimed gears in the
planetary set, must be as thin as possible.
However, rims that are too thin can affect the root
stress distribution resulting in the tooth bending fatigue
failure and cracks. The investigation of the rim thickness
on the fatigue life by finite element method was presented

The potential improvement of such a design process can be made by employing a computer-based optimization process, whose immediate benefits include applicability to gear designs of arbitrary complexity. Using the finite element (FE) method to calculate stress and deflection in cases where no analytical or empirical relation is known broadens the scope of applicability of optimization-based design.
The paper is organized as follows. In Section 2 the formulation of the constrained thin-rimmed gear optimization problem with the appropriate constraints and feasible domain is given. Moreover, Section 2.1 presents the penalty method used to convert the formulated optimization task into an unconstrained optimization problem. Sections 2.2 and 2.3 summarize the application of the Conjugate Gradient and Simulated Annealing algorithms to the given optimization problem, respectively. The formulation of the computational finite element model is given in Section 3. The obtained simulation results are presented and analyzed in Section 4. Finally, the future work and conclusions are drawn in Section 5.

2. FORMULATION OF THE
OPTIMIZATION PROBLEM
The problem of minimum mass design of spur gears has
been of a considerable interest for a number of
researchers, primarily due to the requirement of low mass
design in high-performance power transmission
applications. Weight minimization is the most frequent
structural optimization problem, generally formulated as a
constrained optimization problem which can be stated as
follows

Optimization process of complex mechanical design


problems often leads to complicated objective functions
with a large number of design variables and highly
nonlinear constrains. Thus, it is not rare that such
optimization problems are multimodal with the discrete
search space. Therefore, conventional gradient-based
optimization methods are ineffective for the solving of
such complex non-differentiable optimization problems
due to the high number of design variables and the
methods disadvantage of being trapped into local minima.
In the past decades, metaheuristic optimization methods
have proven effective against such drawbacks, and have
become powerful methods for solving tough optimization
problems in many fields of science and engineering. The
metaheuristics are able to provide global solution to the
optimization problem with higher number of design
variables.

\min_{\mathbf{x} \in D} W(\mathbf{x})    (1)

subject to the n equality and m inequality constraints

g_i(\mathbf{x}) \ge 0,\ i = 1, \ldots, m
h_j(\mathbf{x}) = 0,\ j = 1, \ldots, n.    (2)

The objective function of such a problem for a homogeneous material can be defined as

W(\mathbf{x}) = \sum_{i=1}^{n} \gamma V_i    (3)

where γ is the weight density and V_i is the volume of the i-th structural element. For the purposes of the considerations in this paper, the volume of the internal thin-rimmed gear is approximated using the rim thickness as the main parameter. Therefore, the objective function of the thin-rim internal gear is given as follows:

W(\mathbf{x}) = \gamma V = \gamma \frac{\pi b}{4}\left[(H_r + 1)^2 m_n^2 z^2 - (m_n z + h_F)^2\right]    (4)

where H_r is the rim thickness, m_n is the normal gear modulus, h_F is the tooth height and b is the width of the gear. The rim thickness can be defined as the ratio of the rim thickness to the root radius:

H_r = \frac{R_s - r_f}{r_f}    (5)

where R_s is the outer radius and r_f is the root radius of the internal gear.

Metaheuristics such as Simulated Annealing (SA), evolutionary algorithms (EAs) and swarm optimization algorithms have been a subject of considerable interest in the solving of complex mechanical power transmission design problems. Recent work related to the minimum mass design of spur gears includes the application of a genetic algorithm to the mass minimization of a single-stage helical gear unit, including the sizing of the shafts and the gear housing [7-8].

In this paper, an optimization procedure based on the simulated annealing method has been used in combination with the finite element method to generate the optimal thin-rim gear configuration subjected to constraints on the maximum stress value. SA models the physical process of heating a material and then slowly lowering the temperature to decrease defects, thus minimizing the system energy. Simulated annealing has the ability to avoid being trapped in local minima. Therefore, it can provide globally optimal solutions even with discontinuous and non-differentiable objective functions. The stress-strain calculations of the root fillet stress of the thin-rim gear are performed on a model based on the FE method.
The corresponding constraints of the minimum mass optimization problem take into consideration the strength of the gear, through the factor of safety, and the achievable size constraints. The functional constraint in the form of an inequality is given as

g(\mathbf{x}) = \frac{[\sigma_F]}{\sigma_F} - S_F > 0    (6)

where σ_F is the working bending stress, [σ_F] is the allowable bending stress and S_F is the bending stress factor of safety. The corresponding size constraints

\mathbf{x}_{\min} \le \mathbf{x} \le \mathbf{x}_{\max}    (7)

are included in the optimization model. Finally, taking into consideration the given optimization problem (4) and the corresponding constraints (6) and (7), the nonempty feasible domain is defined as follows:

D = \{\mathbf{x} \in \mathbb{R}^n \mid g(\mathbf{x}) \ge 0 \ \wedge \ h(\mathbf{x}) = 0\}    (8)
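The feasibility test implied by (6)-(8), for a design vector x = [H_r, z, m_n, h_F, b] as introduced in (9) below, can be written compactly; the stress value in the example is a placeholder for the FE result described later in the paper, and the bounds are illustrative assumptions.

def is_feasible(x, sigma_f, sigma_allow, s_f, x_min, x_max):
    """Check the functional constraint (6) and the size constraints (7);
    sigma_f is the working root fillet stress for this design."""
    g = sigma_allow / sigma_f - s_f                                        # constraint (6): must be > 0
    in_box = all(lo <= xi <= hi for xi, lo, hi in zip(x, x_min, x_max))    # constraint (7)
    return g > 0 and in_box                                                # membership in the feasible domain D (8)

# Illustrative call: the stress value stands in for the FE result
x     = [1.2, 90, 3.0, 6.75, 40.0]          # H_r, z, m_n (mm), h_F (mm), b (mm)
x_min = [0.5, 60, 2.0, 4.0, 20.0]
x_max = [3.0, 120, 5.0, 12.0, 60.0]
print(is_feasible(x, sigma_f=180.0, sigma_allow=450.0, s_f=1.5, x_min=x_min, x_max=x_max))  # True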

2.2. Conjugate gradient


Conjugate gradient (CG) algorithm was first proposed as
the algorithm for the numerical solution of a system of
linear equations, and later generalized for the
unconstrained optimization of the non-quadratic functions
by Fletcher and Reeves [10]. The main advantage of CG
algorithm is its simplicity and a rapid convergence,
characterized by the quadratic convergence rate.
Moreover the algorithm does not require any explicit
second derivatives of the objective function making it less
computationally expensive.

(8)

Thus, the vector of design variables has the following


form

x = [ H r , z, mn , hF , b ] .
T

(9)

The descent direction in the initial iteration is chosen as a


direction of the negative gradient of the objective function
at initial point. Thus, the initial iteration of the conjugate
gradient method is the same as of the Steepest Descend
algorithm. Therefore, the descend direction in the initial
iteration is given as
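For illustration only (not part of the paper), the feasibility test implied by the functional constraint (6), the size constraints (7) and the domain (8) for a design vector of the form (9) can be sketched in Python as follows; the stress value, allowable stress, required safety factor and bounds in the example call are placeholder numbers, not data from the paper.

    def is_feasible(x, sigma_F, sigma_F_allow, S_F, x_min, x_max):
        """Feasibility test for the design vector x = [H_r, z, m_n, h_F, b] of (9).

        sigma_F        -- working bending stress of the candidate design (e.g. from an FE run)
        sigma_F_allow  -- allowable bending stress [sigma_F]
        S_F            -- required bending stress factor of safety
        x_min, x_max   -- lower and upper size bounds of (7), same length as x
        """
        g = sigma_F_allow / sigma_F - S_F                   # functional constraint (6): g(x) > 0
        in_box = all(lo <= xi <= hi for xi, lo, hi in zip(x, x_min, x_max))   # size constraints (7)
        return g > 0 and in_box                             # x belongs to the feasible domain D of (8)

    # Example call with purely illustrative numbers:
    # is_feasible([0.15, 70, 1.5, 3.4, 20.0], sigma_F=180.0, sigma_F_allow=400.0,
    #             S_F=1.5, x_min=[0.05, 50, 1.0, 2.0, 10.0], x_max=[0.5, 90, 2.5, 5.0, 40.0])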

2.1. Penalty method

The penalty method is a popular and intuitive constraint-handling method that approximates the constrained optimization problem by an unconstrained optimization problem using a penalty function [9]. The main idea of the method is to modify the objective function into the following form:

F_W(x) = W(x) + Σ_{i=1..m} P_i(x),    (10)

where P_i(x) is the penalty function corresponding to the i-th constraint. The penalty function is introduced to eliminate any solution that leads to constraint violation. Therefore, the constrained weight minimization problem (1) can be converted into an unconstrained minimization problem of the following form:

min_{x ∈ R^n} F_W(x) = min_{x ∈ R^n} [ W(x) + Σ_{i=1..m} P_i(x) ].    (11)

The penalty function can be defined as

P_i(x) = 0,              x ∈ D
P_i(x) = s_i(x) R_i,     x ∉ D,    R_i > 0,    i = 1, ..., m    (12)

where R_i is the penalty factor that measures the importance of the i-th penalty function and s_i(x) is a continuous function that accounts for the equality and inequality constraints, defined as

s_i(x) = g_i²(x),    i = 1, ..., m.    (13)

Based on the definition of the penalty function given in (12), it can be seen that the penalty method directly evaluates every feasible solution by its objective function value, while the penalty function is applied to infeasible solutions, thus decreasing their fitness value. A major disadvantage of the penalty method is the adequate choice of the penalty factor, which largely affects the efficiency of the search process. The penalty factor primarily depends on the level of violation and varies accordingly.
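As a minimal sketch of the exterior penalty construction (10)-(13), assuming inequality constraints of the form g_i(x) ≥ 0, the modified objective might be built as follows; the weight function W, the constraint list and the penalty factors are placeholders to be supplied by the caller.

    def penalized_objective(W, constraints, R):
        """Build F_W(x) = W(x) + sum_i P_i(x) as in (10)-(13).

        W            -- callable returning the gear weight W(x)
        constraints  -- list of callables g_i(x); the point is feasible when g_i(x) >= 0
        R            -- list of penalty factors R_i > 0
        """
        def F_W(x):
            value = W(x)
            for g_i, R_i in zip(constraints, R):
                g = g_i(x)
                if g < 0:                    # x lies outside the feasible domain D
                    value += R_i * g ** 2    # P_i(x) = R_i * s_i(x), with s_i(x) = g_i(x)^2
            return value
        return F_W

Larger values of R_i punish violations more strongly, which reflects the sensitivity to the choice of the penalty factor discussed above.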

2.2. Conjugate gradient

The conjugate gradient (CG) algorithm was first proposed for the numerical solution of systems of linear equations and was later generalized to the unconstrained optimization of non-quadratic functions by Fletcher and Reeves [10]. The main advantages of the CG algorithm are its simplicity and rapid convergence, characterized by a quadratic convergence rate. Moreover, the algorithm does not require any explicit second derivatives of the objective function, which makes it less computationally expensive.

The descent direction in the initial iteration is chosen as the direction of the negative gradient of the objective function at the initial point. Thus, the initial iteration of the conjugate gradient method is the same as that of the steepest descent algorithm, and the descent direction in the initial iteration is given as

d^(0) = −∇W(x^(0)),    (14)

where ∇W(x^(0)) is the gradient of the objective function calculated at the initial point x^(0).

The sequence of approximations of the optimal solution {x^(k), k = 0, 1, ...} is generated using the iterative form of the conjugate gradient method given as

x^(k+1) = x^(k) + α_k d^(k),    (15)

where α_k > 0 is the step length computed such that the function of one variable φ_k(α) has a local minimum at α = α_k, that is

min_α φ_k(α) = min_α W(x^(k) + α d^(k)),    (16)

and d^(k) is the descent direction of the algorithm in the k-th iteration, given by

d^(k) = −∇W(x^(k)),                      k = 0
d^(k) = −∇W(x^(k)) + β_k d^(k−1),        k ≥ 1.    (17)

There are several formulas for the calculation of the conjugacy parameter β_k; among them the best known is the Fletcher-Reeves formula [10], which quickly finds the optimum of a convex quadratic function and is defined as

β_k = ||∇W(x^(k))||² / ||∇W(x^(k−1))||²,    (18)

where ||·|| denotes the Euclidean norm. The key advantage of the CG algorithm lies in the fact that the descent direction in the current iteration is constructed from the gradients of the objective function in the current and previous iterations. Thereby the convergence speed of the algorithm can be improved with careful control of the parameter β_k. If the solution in the previous iteration leads closer towards the optimal solution, the parameter β_k takes higher values and the descent direction from the previous iteration prevails in the search process. However, if the solution leads away from the optimum, the gradient of the objective function in the current iteration takes the crucial part in the construction of the descent direction.
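A compact sketch of the Fletcher-Reeves iteration (14)-(18) is given below, assuming a numerically estimated gradient and a simple backtracking line search in place of the exact one-dimensional minimization (16); the step-size constants, tolerances and finite-difference increment are illustrative choices, not values from the paper.

    import numpy as np

    def conjugate_gradient_fr(F, x0, n_iter=50, eps=1e-6, h=1e-6):
        """Fletcher-Reeves conjugate gradient iteration, cf. (14)-(18)."""
        def grad(x):
            # forward-difference estimate of the gradient of the objective
            g, fx = np.zeros_like(x), F(x)
            for i in range(x.size):
                e = np.zeros_like(x)
                e[i] = h
                g[i] = (F(x + e) - fx) / h
            return g

        def line_search(x, d, g):
            # backtracking (Armijo) search standing in for the scalar problem (16)
            alpha, fx = 1.0, F(x)
            while F(x + alpha * d) > fx + 1e-4 * alpha * np.dot(g, d) and alpha > 1e-12:
                alpha *= 0.5
            return alpha

        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g                                            # initial descent direction (14)
        for _ in range(n_iter):
            alpha = line_search(x, d, g)
            x_new = x + alpha * d                         # iteration (15)
            g_new = grad(x_new)
            beta = np.dot(g_new, g_new) / np.dot(g, g)    # Fletcher-Reeves parameter (18)
            d_new = -g_new + beta * d                     # new descent direction (17)
            if abs(F(x) - F(x_new)) <= eps * max(abs(F(x)), 1.0):   # relative change, cf. (23)
                return x_new
            x, g, d = x_new, g_new, d_new
        return x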

2.3. Simulated annealing

Simulated annealing (SA) is a meta-heuristic technique designed to mimic the annealing process in material processing and has been successfully applied to global optimization problems. It was first presented by Kirkpatrick et al. [11] for combinatorial optimization problems as a generalization of the Metropolis Monte Carlo integration algorithm. The fundamental idea of the simulated annealing algorithm is to use a random search which not only accepts changes that decrease the objective function, but also retains some of the solutions that lead away from the local minimum. The SA algorithm uses the Boltzmann probability distribution to keep such worse solutions and thus avoid being trapped in local minima.

The algorithm starts from an initial solution vector x^(0), with an initial temperature parameter T^(0) and the temperature cooling factor α. A candidate solution x^c is generated randomly by applying a small perturbation to the solution vector x^(0), and the objective function values are evaluated. The objective function difference can then be expressed as:

Δf = f(x^(0)) − f(x^c).    (19)

If the change in the objective function values between the current and the candidate solution is positive, i.e. Δf > 0, the candidate solution improves the objective function and the current solution is replaced by the candidate solution. Otherwise, the objective function difference is negative, i.e. Δf < 0, and the candidate solution leads away from the local minimum. In that case the candidate solution x^c is accepted with the transition probability determined by:

p = e^(Δf / (kT))    (20)

where k is the Boltzmann constant and T is the temperature parameter which controls the annealing process. Such a solution is accepted in order to explore other potential regions of the solution space and converge towards the global minimum. Therefore, the solution for the (i+1)-th iteration is accepted according to:

x^(i+1) = x^c,      (Δf < 0) ∧ (p > r)
x^(i+1) = x^(i),    otherwise    (21)

where r is a uniform random number in the range [0, 1].

The process of metal annealing requires precise control of the temperature and of the cooling rate, which is specified by the annealing schedule. From (20) it is clear that, as the temperature decreases, the probability of accepting worse solutions decreases exponentially. Rapid cooling results in the SA algorithm failing to explore the entire solution space and to converge towards the globally optimal configuration. On the contrary, extremely slow cooling produces slow convergence and a long execution time of the SA algorithm. Therefore, to avoid such undesirable effects, the geometric cooling schedule, as the most commonly used cooling schedule in the SA literature, is applied, described by the following temperature-update relation:

T^(k+1) = α T^(k),    (22)

where α is a constant cooling factor such that α ∈ (0, 1).
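A minimal sketch of the SA loop described by (19)-(22), assuming a simple uniform perturbation of the current design vector; the initial temperature, cooling factor, step size and iteration count are illustrative, and the Boltzmann constant of (20) is absorbed into the temperature parameter.

    import math, random

    def simulated_annealing(f, x0, T0=1.0, alpha=0.9, n_iter=500, step=0.1):
        """Minimize f by simulated annealing with geometric cooling, cf. (19)-(22)."""
        x = list(x0)
        fx = f(x)
        best, f_best = list(x), fx
        T = T0
        for _ in range(n_iter):
            # candidate solution: small random perturbation of the current vector
            xc = [xi + step * random.uniform(-1.0, 1.0) for xi in x]
            fc = f(xc)
            df = fx - fc                              # objective difference, eq. (19)
            # accept improvements outright; accept worse solutions with probability (20)
            if df > 0 or random.random() < math.exp(df / T):
                x, fx = xc, fc
                if fx < f_best:
                    best, f_best = list(x), fx
            T *= alpha                                # geometric cooling schedule, eq. (22)
        return best, f_best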

2.4. Termination criteria

The adequate choice of the termination criterion can determine whether or not the optimization algorithm is efficient and successful in the search for the optimal solution. To terminate the iterative procedures of the proposed optimization algorithms, the following termination criteria are used:
- the algorithm runs until a predetermined number of iterations is reached, and
- the relative change in the objective function value between two consecutive iterations is less than a predefined positive number ε, mathematically written as

|W(x^(k)) − W(x^(k+1))| / |W(x^(k))| ≤ ε.    (23)

When relation (23) is satisfied, the search procedure has reached the region of the optimal solution and further iterations cannot lead to a significant improvement of the objective function.

3. COMPUTATIONAL FE MODEL

There are major difficulties with the traditional method used in gear design regarding the computation of the relative deformations and stresses. The model for determining the bending stress is derived from the cantilever beam and presents an approximation of the exact stress distribution within the gear tooth. To correct the beam model, additional semi-empirical stress concentration factors are added to the model.

The model employed in this paper is based on the finite element method to compute relative deformations and stresses and provides an alternative to the traditional method for the calculation of the bending stress. This model does not require any empirical factors, and the bending stress can be predicted accurately, thus allowing a fully autonomous optimization process with a high number of design variables as well as the satisfaction of complex constraints.


The contact surfaces between the teeth of the pinion and the internal gear are discretized with a large number of finite elements in order to accurately represent the complex involute shape of the tooth profile. Moreover, the regions of possible stress concentration, such as the trochoid arc, are modelled with a finer FE mesh.

The FE model for calculating the stress-strain field at the tooth root of the planetary gear set used in this paper is shown in Picture 1.

Picture 1. FE computational model of the planetary gear set used in the analysis

The solutions of the FE model are obtained using a Gauss elimination nonlinear equation solver [12]. Due to the high number of degrees of freedom of the FE model and the contact conditions between tooth profiles along the line of action, the stiffness matrices of the system are often singular. To avoid this numerical problem, the FE mesh is carefully discretized in the regions of possible singularities. The calculations of the FE model are performed assuming a homogeneous and isotropic material, without any imperfections or damage.

4. SIMULATION RESULTS

In this section the results of the numerical simulation are presented to verify the improvements in the optimal design solution achieved by the proposed SA-based optimization procedure compared to the results obtained by the conventional CG algorithm.

The optimization procedure, which takes into consideration the effects of the rim thickness on the internal gear root fillet stress, is applied to an example planetary gear set. The example planetary gear set, whose corresponding FE model is given in Picture 1, consists of a floating sun gear and three equally spaced planet gears, with the internal gear rigidly connected to the housing. A torque of Tin = 1500 Nm is applied to the input of the system. The basic design parameters of the considered planetary gearbox are listed in Table 1.

The root fillet stress has been obtained from the FE model of the internal gear with constant geometric properties and different rim thicknesses, to evaluate how this geometric property may affect bending fatigue failure and crack propagation.

Table 1. Parameters of the planetary gear set used in this paper (all dimensions are in mm, unless specified differently)

                               Sun        Planet     Internal
No. of teeth                   34         18         70
Module                         1.5        1.5        1.5
Pressure angle [°]                        21.3
Outer diameter                 53.893     31.594     105.408
Root diameter                  47.019     24.774     112.382
Base circle                    51         27         105
Young's modulus [N/mm²]                   2.07·10^5
Poisson's ratio                           0.3
Density [kg/m³]                           7860

It is assumed that the critical working bending stress is at the point where the maximum equivalent stress of the FE model is achieved.

The proposed optimization methods, i.e. the simulated annealing and conjugate gradient optimization algorithms, are applied to the constrained thin-rimmed gear weight minimization problem using the described penalty method. The constrained optimization problem is therefore relaxed to an unconstrained optimization problem using the modified objective function, according to (10).

The convergence of the optimum internal gear design parameter values obtained using the proposed optimization procedure based on the SA and CG optimization algorithms is presented in Picture 2. Moreover, the variation of the objective function with the rim thickness parameter for different numbers of evaluations using the SA and CG algorithms is shown in Picture 3.

Picture 2. Objective function value [kg] versus the number of iterations for the simulated annealing and conjugate gradient algorithms
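As a quick cross-check of the data in Table 1 (an illustration, not part of the paper), the tooth numbers can be verified against the standard concentricity condition of a simple planetary stage, and the nominal transmission ratio with the internal gear fixed can be computed:

    z_sun, z_planet, z_ring = 34, 18, 70     # tooth numbers from Table 1

    # concentricity (meshing) condition of a simple planetary stage: z_ring = z_sun + 2 * z_planet
    assert z_ring == z_sun + 2 * z_planet

    # nominal transmission ratio with the internal (ring) gear fixed, sun input, carrier output
    i_sun_carrier = 1.0 + z_ring / z_sun
    print("transmission ratio =", round(i_sun_carrier, 3))   # about 3.059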

Picture 3. Rim thickness [mm] versus the number of iterations for the simulated annealing and conjugate gradient algorithms

From Picture 2 a significant improvement in the optimal design solution achieved by the SA-based optimization procedure can be seen; it represents the global optimal solution with the minimum mass design. The CG-based optimization procedure converges towards a local solution and therefore achieves a worse optimal design. However, concerning the convergence properties of the algorithms, it is observed that the CG algorithm converges towards its optimal solution in a significantly smaller number of iterations, whereas the SA algorithm requires about three times more iterations to converge. This is primarily due to the global search characteristics of the SA algorithm, which requires more iterations to thoroughly explore the feasible domain and find a globally optimal solution.

Comparing the data presented in Picture 2 and Picture 3, it is observed that, as the rim thickness parameter changes, the mass of the internal thin-rimmed gear changes proportionally. This conclusion is in agreement with the assumptions made in the FE model, namely the homogeneous and isotropic material assumptions.

The observations made on the numerical simulations show that the proposed thin-rimmed gear optimization procedure based on the SA algorithm can achieve better design solutions than the traditional gradient-based algorithms, however at the cost of a higher number of iterations.

5. CONCLUSION

In this paper, an optimization procedure for the thin-rimmed gear design in relation to the root fillet stress is developed and analyzed, employing the FE model to compute the deformations and the root fillet stress in the optimization process.

The presented simulation results show that the CG algorithm converges towards its solution in a smaller number of iterations, although the SA algorithm can efficiently solve the optimization problem and produces the solution with the minimal weight that satisfies all the constraints. It is observed that the SA algorithm outperforms the gradient-based algorithm and achieves the globally optimal design configuration.

Future work may include a multi-objective optimization problem that includes an additional techno-economic objective function. Moreover, the design parameters of the optimization problem can be further extended to include additional parameters.

References
[1] Bill,R.C.: Advanced Rotorcraft Transmission Program, 46th Annual American Helicopter Society Forum, (1990).
[2] Lancaster,O.E.: Jet propulsion engines, Princeton University Press, 2015.
[3] Tong,B., Walton,D.: The optimisation of internal gears, International Journal of Machine Tools and Manufacture, 27(4) (1987) 491-504.
[4] Yokota,T., Taguchi,T., Gen,M.: A solution method for optimal weight design problem of the gear using genetic algorithms, Computers & Industrial Engineering, 35(3) (1998) 523-526.
[5] Kramberger,J., Sraml,M., Potrc,I., Flasker,J.: Numerical calculation of bending fatigue life of thin-rim spur gears, Engineering Fracture Mechanics, 71(4) (2004) 647-656.
[6] Bodas,A., Kahraman,A.: Influence of carrier and gear manufacturing errors on the static load sharing behavior of planetary gear sets, JSME International Journal Series C, 47(3) (2004) 908-915.
[7] Chandrasekaran,M., Padmanabhan,S.: Single speed gear box optimization using genetic algorithm, ARPN Journal of Engineering and Applied Sciences, 10(13) (2015) 5506-5511.
[8] Wang,H., Wang,H.P.: Optimal engineering design of spur gear sets, Mechanism and Machine Theory, 29(7) (1994) 1071-1080.
[9] Byrd,R.H., Lopez-Calva,G., Nocedal,J.: A line search exact penalty method using steering rules, Mathematical Programming, 133(1) (2012) 39-73.
[10] Fletcher,R., Reeves,C.M.: Function minimization by conjugate gradients, The Computer Journal, 7(2) (1964) 149-154.
[11] Kirkpatrick,S., Gelatt,C.D., Vecchi,M.P.: Optimization by simulated annealing, Science, 220(4598) (1983) 671-680.
[12] Hughes,T.J.: The finite element method: linear static and dynamic finite element analysis, Courier Corporation, New York, 2012.


PROJECTION OF QUALITY A COMPLEX TECHNICAL SYSTEM


LJUBIA TANI
Project Management College, Belgrade, ljtancic@gmail.com
PETAR JOVANOVI
Project Management College, Belgrade
SAMED KAROVI
Project Management College, Belgrade

Abstract: The paper presents the solution of the interior ballistic design task for a small arms weapon of a specific caliber. The optimal solution is sought by selecting the best physical-chemical and ballistic characteristics of the powder, the powder mass, the powder chamber and other characteristics, and by executing the interior ballistic calculation on a computer. The solution is compared with existing solutions and conclusions are given.
Keywords: design of small arms weapons, optimization.
1. INTRODUCTION

The implementation of modern business and other activities, ventures and projects is burdened with extraordinary complexity and uncertainty, caused primarily by the increasing complexity of the projects themselves and of the environment in which they are carried out, and by the extremely rapid pace of the development of science, technology and civilization as a whole.

Quality management of a project is one of the sub-processes of the global project management concept. The main objective of particular project management, in addition to minimizing wasted time, resources and costs, is the completion of the project within the required or necessary quality [6].

This paper discusses the design of complex technical


systems, and the design of the optimum solution to a
ballistic design of a complex technical system [7] and [8].
A specific caliber weapon as a complex technical system,
structurally composed of aggregates, components, subassemblies and parts and is functionally designed for
firing shots in small arms is described.

This complexity most often leads to serious problems in


the implementation of various ventures and projects
which is reflected in substantial delays and increased total
cost of implementation and ineffective implementation on
the whole [1] and [2].

Firing bullets is a complex thermodynamic and


gasdynamic process of an extremely fast, almost
immediate, converting the chemical energy of the powder
first into the heat and then into the kinetic energy of
powder gases that cause the movement of the missile
system - filling - tube movement [9] and [10].

In order to realize complex projects effectively various


management disciplines, methods and techniques are used
today. The best of them appears to be project
management, an operational management discipline that
enables effective management of the implementation of
various projects and programs [3].

As an illustration: the process of firing takes a few hundredths of a second; the maximum pressure of the powder gases reaches values of the order of 100 to 600 MPa; the temperature of the powder gases at the moment of their creation is 2100-3800 K and at the moment of leaving the tube it is 1500 to 2000 K; the maximum velocity of the projectile at the moment it leaves the tube is 70 to 1500 m/s and the acceleration is of the order of 150 000 to 600 000 m/s².

For an efficient project management of complex technical


systems (military, industrial, construction and other
projects) it is necessary that special attention should be
paid to the management of risk and the quality of the
project as part of a comprehensive project management
[4], in addition to technical, time, cost constraints and
elements.
Risk management involves a set of management methods
and techniques used to reduce the possibility of realizing
unwanted and harmful events and consequences, thus
enhancing the ability to achieve the planned results [5].
So it is a set of methods that enable minimization of
losses and reduce the probability of losses as well as the
costs incurred by this reduction.

These orders of magnitude indicate that this is a very dynamic phenomenon of impulse character and that there is a high risk in the theoretical and experimental phases of the design.
The aim of this paper is to specify, based on the defined
required data such as muzzle velocity and maximum

pressure of powder gases, only a certain number of


optimum ballistic parameters [8]: the most suitable model
and type of gunpowder, the mass of propellant and
propellant chamber volume based on the criteria of the
quality of the complex technical system and highlight
possible risks that may accompany the execution of this
complex project.

conservation of mass, momentum and energy using


Gauss-Ostrogradski transformation a mathematical model
is developed for an arbitrary moment of time during the
combustion of gunpowder. The process of firing is
observed from the time of the burning propellant behind
the projectile create enough pressure at which the sleeve
missiles will be etched into the grooves of pipes and start
up the missiles. It is assumed that at this time all the
initial and boundary conditions are known. After
completion of the powder combustion, the two-phase flow
becomes a single flow, the flow of a propellant gas. The
established mathematical model then becomes a classic
gas dynamic model [9].

2. BASICS OF BALISTIC PROJECT DESIGN


AND OF MATHEMATICAL MODEL OF
TWO PHASE FLOW
The process of firing in the complex chamber systems is a
gasdynamic process that is, in the space of the stationary
bottom of the tube and the moving projectile,
characterized by streaming of the two phases: solid
from burning gunpowder grains and gaseous from
powder gases as a propellant combustion product [10].
The ballistic design involves determining the optimum
basic structural characteristics of tube filling conditions
and energy characteristics of firing rules based on the
quality of the products that are put before a system is
described, and is defined by three main objectives [8]:

The system of equations is derived in Eulerian


coordinates t (time) and x (any position in the tube of the
breech face to the bottom of the projectile) [10], and then
transformed into a system of Lagrangeian coordinates t
and s (the mixture of gunpowder and propellant gases in
the individual points behind the projectile). The initial
assumptions and the overall performance, with additional
equations, initial and boundary conditions are given in
[12] and [13]; here, only the general form of partial
differential equations is described:

- the classic task of defining the barrel caliber, projectile


weight and muzzle velocity and determining the
characteristics of the channel tube, the terms of system
charging, the basic characteristics of charge and all the
data necessary for the calculation of pipe systems and the
projectiles;

∂Y_k/∂t + Σ_{l=1..m} a_{k,l} ∂Y_l/∂s = b_k,    k = 1, ..., m    (1)

where: m - number of equations


(m = 5 while powder combustion lasts and
m = 3 after the combustion is finished) and

the general task of adopting the maximum range, the


penetration on a given target range for subcaliber
projectiles and armor projectiles or the horizontal
distance and height of the target for anti-aircraft
systems, selecting the system caliber, the projectile
weight, the muzzle velocity and all the other
parameters, in the same way as the classical task does.
It is obvious that the classical task is part of the
generalized task. The general task generally consists of
two parts. The first part is the process of selecting an
optimal complex technical system, the other is the
classic problem of ballistic design.

a_{k,l} - variable coefficients, some of which are equal to zero.

The given system of equations is non-linear, non-homogeneous and non-stationary, and it is solved numerically using the theory of finite differences. Numerous numerical schemes can be set up for the system. When choosing a scheme, three conditions are to be met [14]:
1. Condition of consistency - the scheme must not tend to the solution of any other system;
2. Condition of convergence - the errors in the calculation must be bounded;

the simplified task of, compared to the classic, defining


some additional constraints. Thus, for example, a
complex technical system is designed to use either the
same ammunition as any already existing system or the
same pipe. This task is worked out in the same way as
the classic task, however, some of the parameters are
already given in advance and need not be analyzed.
This task involves the problem of modification of some
part of the existing complex system on the basis of
change in the internal ballistic parameters.

3. Condition of Stability the errors tend to zero value.


The condition of stability and convergence of numerical
schemes is developed [15,16] and a program for a
personal computer is devised that is used to analyze the
influence of the initial parameters [17].
The stability condition of the numerical scheme is defined by:

|λ_max| ≤ 1    (2)

where λ_max is the largest eigenvalue of the matrix [A].

The basic elements of the internal ballistic design are the tube and the bullet. They are a necessary condition for the process of firing a bullet in the complex chamber system to take place. Today's theory of war anticipates as large a range of complex systems as possible, so there is a tendency in development to modify existing complex technical systems, or to construct new ones, in order to achieve better performance [11].

Let Δτ be the increment in time and h the increment in s, with Δτ = r·h; the value of r is then determined from the convergence condition of the total system error [15]:

Based on the initial assumptions and the law of


A_l = Σ_{k=1..m} |a_{k,l}|,    r_{l,i} ≤ 1/(A_l)_i,    r = min_i min_l (r_{l,i})    (3)

The parameters evidently have different influence gradients on the results; for example, the unit grain burning speed has the smallest allowed percentage of change and the highest influence on the results, while the engraving pressure has the reversed situation.
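Purely as an illustration, and only under the reconstruction of (3) given above, the admissible step-size ratio r at a single grid point could be evaluated as follows; the coefficient matrix below is a made-up placeholder, not data from the paper.

    import numpy as np

    # Placeholder coefficient matrix a[k, l] of system (1) at one grid point (illustrative values only)
    a = np.array([[1.2, 0.4, 0.0],
                  [0.3, 0.9, 0.5],
                  [0.0, 0.2, 1.1]])

    A = np.abs(a).sum(axis=0)          # A_l = sum over k of |a_{k,l}|, cf. (3)
    r = float((1.0 / A).min())         # r = min over l of r_l, with r_l <= 1 / A_l
    print("admissible step-size ratio r =", r)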

3. THE RESULTS OF THE FACTOR PLAN OF


EXPERIMENT 2n

After the selection of the influential parameters is finished, all parameters which increase the results are taken, and with these parameter values the upper output limit is obtained (Table 2). The initial parameters, based on the QPI, have upper and lower allowed tolerances. After that, all initial parameters which decrease the output results are taken, and the other output limit is obtained in the same way.

The program solution to the theoretical-numerical model


can be tested with different values of the initial data
which influence the outputs differently. All initial
parameters must satisfy the Quality Product Instruction
QPI and the Instruction of Commission International
Permanente CIP [18] which define the allowed tolerances
for some parameters. However, initial parameters can
have higher and lower value as allowance values with the
same reliability.


To estimate the initial parameters influence on the


mathematical model results and to execute the parameters
rank, the experiment factor plan 2n, based on the [19], is
carried out. The factor plan results are:

There are initial parameters which have not significant


influence on the model either alone or in combination
with other initial parameters. They are: conduct heat
coefficient (0), preliminary period time (t0), projectile
beginning position (X0), powder gases density at the
beginning of first period (0), dynamical viscosity
coefficient (), transition heat coefficient (b) and initial
powder grain temperature (Tb0).
There are initial parameters which have an essential
influence on the model either alone or in combination
with other initial parameters. They are: pressure
engraving (p0), projectile mass (m), powder mass (mb),
powder gases co-volume (), propellant grain area (Szo)
and unit grain burning speed (uzo).

Table 1. Initial parameters influence on calculation results

Entrance rank   Parameter   Level   Percent %   pm [bar]   V0 [m/s]   Exit rank
1.              p0          +       20          3009       712        6.
                            -       20          3009       710
2.              m           +       1.56        3055       709.7      4.
                            -       1.56        2964       710
3.              mb0         +       1.55        3059       717        3.
                            -       1.55        2959       702
4.              η           +       1.46        3032       713        5.
                            -       1.46        2990       709
5.              Sz0         +       1.3         3075       715        2.
                            -       1.3         2945       704.8
6.              uz0         +       1.3         3112       721        1.
                            -       1.3         2906       700
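As an illustration (not part of the paper's program), the exit ranks of Table 1 can be reproduced by ordering the parameters by the spread of the maximum pressure between their + and - runs:

    # pm [bar] for the (+, -) runs of each significant parameter, taken from Table 1
    pm_runs = {
        "p0":  (3009, 3009),
        "m":   (3055, 2964),
        "mb0": (3059, 2959),
        "eta": (3032, 2990),
        "Sz0": (3075, 2945),
        "uz0": (3112, 2906),
    }

    spread = {name: abs(hi - lo) for name, (hi, lo) in pm_runs.items()}
    ranking = sorted(spread, key=spread.get, reverse=True)
    for rank, name in enumerate(ranking, start=1):
        print(rank, name, spread[name])   # reproduces the exit ranks 1..6 of Table 1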

Table 2. Initial parameters (maximum, medium and minimum)

Parameter         ULAP762.max              ULAP762.srd      ULAP762.min
uz0 [m/(s·Pa)]    8·10^-10                 7.9·10^-10       7.8·10^-10
mb0 [kg]          0.001645                 0.00162          0.0015948
m [kg]            0.0080232                0.0079           0.0077767
Sz0 [m²]          4.0733·10^-07            4.127·10^-07     4.1806·10^-07
η [m³/kg]         0.0009243                0.00091          0.00089769
p0 [Pa]           1.5·10^+07               1.25·10^+07      1·10^+07
pm [Pa]           333.8·10^+06 (+9.3%)     311·10^+06       279.5·10^+06 (-8.45%)
V0 [m/s]          736.7 (+3.1%)            720.8            689.8 (-3.474%)
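As a small arithmetic illustration (not from the paper), the limit values of Table 2 define the band within which the later experimental results should fall; for example, for the maximum pressure:

    pm_max, pm_srd, pm_min = 333.8e6, 311e6, 279.5e6   # limit and nominal pm from Table 2 [Pa]
    pm_experimental = 294.77e6                          # average measured value reported in Section 5 [Pa]

    print(pm_min <= pm_experimental <= pm_max)          # True: the measurement lies inside the calculated band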

4. RESULTS OF CALCULATION OF TWOPHASE FLOW AND PROJECTION


BARREL
Model (chemical composition) and the type of powder
(shape and size) are usually selected on the basis of the
total pulse pressure of powder gases. If it is necessary to
design a new powder first, then the desired chemical
composition of gunpowder is defined. Based on the
chemical composition of the powder the thermo chemistry
of gunpowder [13] is defined, that is, the mathematical
model for calculating the specific characteristics of
gunpowder. Based on the chemical processes occurring
during the combustion of gunpowder, the relations of the
products of combustion (major and minor) are set as well
as the energy balance and the relations of these equations
are developed to perform the calculation of physical,
chemical and ballistic characteristics. For a selected
complex system of a particular caliber, based on the
analysis conducted in the literature [21] the gunpowder
NC - 08 is adopted, with all the physical - chemical and
ballistic characteristics relevant to internal ballistic
calculation.

The execution of the program with the border values of the initial parameters yields the highest and the lowest values of the flow characteristics. Of all the flow characteristics, this paper describes and analyses only the powder gas pressure and the projectile velocity. If the mathematical model is defined correctly, the experimental results must lie within or near these limit output calculations.

Table 1 shows the allowed tolerances, in percentages, for the parameters with a significant influence on the results for the 7.62 mm automatic rifle and the ammunition it uses, based on the QPI and CIP [18]. The influence of the parameters upon the output is obtained by running the computer program for the experiment factor plan 2n, in which the relative connection criterion between the output and the initial parameters is defined.


The calculations are performed on the basis of the


FORTRAN language program on the PC computer [13].
The paper does not offer the program solution but only
the calculation results with a certain comment presented
in Descartes coordinate system and in a form of family
curved lines. Picture 1. presents flow characteristic
propellant gas pressure (p) as functions of time (t) and
situation in the tube (x), the function of propellant gas
density is similar to that of pressure.

Risk management is somewhat different from other


management processes, as in the management of risk it is
difficult to exert the risk event control, but a prior
preparation and response to possible future risk events is
made, in order to reduce the probability of their
occurrence and to increase the probability of achieving
the expected results of the project.
Since the experiments were carried out on a real system with a very dynamic, impulsive character, a detailed analysis of the risk factors (risk events, risk probability and risk magnitude) was conducted in order to reduce the probability of risk to the greatest extent possible. For reliable risk management, all the sub-processes such as risk identification, risk analysis and risk assessment, risk response planning, and control of the risk response application are carried out [24].
Analyzing risks in the project measurement and data
processing of complex technical systems in Table 3 we
conclude that there is a risk that has a critical importance
for the realization of the whole project. The event which
carries the greatest risk of contracting the project is to
supply the devices. Otherwise, the risk event causes other
risk events too. To prevent the occurrence of the risk we
have to reach the agreement with the supplier of
equipment to pay large fines for equipment delay and
oblige him to train the staff.

Picture 1. Pressure propellant gas


Obviously, the internal ballistic characteristics dependant
diagrams fully meet the goal defined in the introduction.
After this phase, the design follows the design of the inner
tube route with all the details such as rifling, transition
cone, gunpowder chamber, etc. After designing the inner
tube of the route the route of external design and
conceptual design of the bullet follows. The designing of
the external route pipes can be realized in classical
modern analytical methods and numerical methods [22]
and [23].

Measuring the propellant gas pressure in the small arms


barrel caliber 7.62 mm is executed with for the purpose of
comparing the calculation data and the real data in the
small arms. For the experiment execution, the trunk,
barrel, wrapping and pressure transducer piesoelectrical
measurer were provided. A measurement chain for
measuring pressure propellant gas by block scheme
(Picture 3.) is connected to the complex technical system
barrel.

Using a matrix to draw diagrams in the previous program


for engineering drawing and design "Katia" is modeled
conceptual design of pipes:

Picture 2. Modeled tube firing weapons 7.62 mm


Picture 3. Block scheme for data measuring and
processing

5. A COMPARATIVE ANALYSIS OF
EXPERIMENTAL DATA AND RESULTS OF
CALCULATION

MTP - piesoelectrical measure transformer pressure, CC coaxial cable, AS amplifier signal, RS - registrar signal
(measure tape recorder), DO - digital oscilloscope, C computer, Pl - plotter and Pr - printer

Project management includes the management of project


risk [4], in order to increase the probability of achieving
the desired objectives of the project and reduce the
possibilities of adverse events and adverse outcomes [5].


model and type of powder, powder mass and volume of


the powder chamber. Then the design of the inner tube
route with all the details such as the cartridge chamber,
the transition cone and the missile lead is made. After
designing the inner tube of the route follows the route of
the design of the external tube and the conceptual design
of the bullet. This projected system, compared to the same
caliber specific system has optimum properties, which
justifies the application and verifies the chosen
calculation method.
This paper gives a theoretical - experimental analysis of
the process of discharging the tube complex systems
based on experimental research and numerical modeling
on a computer. The analysis revealed that the medium
terms of the calculation give the optimum input-output
parameters and acceptable results of the calculations for
the specific complex technical system [21].

Picture 4. Diagrams model and central experiments value

The comparative analysis included the pressure of powder


gases and projectile velocity in the complex system pipe
over time. The results of experiments and models confirm
the nature of the change of pressure and velocity, and the
basic assumption of the mathematical model that the
pressure of powder gases and projectile velocity function
of time and the projectile path. Comparing the
experimental and calculated results of the propellant gases
pressure and the projectile velocity their good mutual
compliance is recognized, which confirms the correctness
of the mathematical model.

Pressure curves on all the measure points and curves of


the model on the measure point 1 are presented as
maximum - (curve h - higher), medium -(curve m)
and minimum - (curve l - lower) obtained with the
initial parameters. The mathematical model and computer
program are corrected with the conditions for
experimental barrel and those calculations are executed
with such conditions, because the firing conditions in the
experimental barrel are different from the conditions in
the fighting barrel. Fig. 1. shows that medium
experimental results are in calculation results zone limited
with higher and lower curves. For this reason, at the end,
medium calculation results are compared with medium
experimental values for position 1.

References
[1] PMI PMBOK Guide, A Guide to the Project
Managment Body of Knowledge, 4th Edition,
Project Managment Institute, Newtown Square, The
USA, 2008.
[2] Kerzner,H.: Project Management, A Systems
Approach to Planning, Scheduling and Controlling,
10th ed. John Wiley&Sons, Inc. New Jersey, The
USA, 2009.
[3] Turner,J.R.: The handbook of project-based
managmentLeading strategic change in organizations, McGraw-Hill, The USA, 2009.
[4] Jovanovi,P.: Project Management, Faculty of
Organizational Sciences, Belgrade, 2006.
[5] Smith,P.: Merritt G.: Proactive Risk Management:
Controlling Uncertainty in Product Development,
Productivity Press, New York, 2004.
[6] Barkley,B.T.: Project Management in New Product
Development, New York, McGraw-Hill, 2008.
[7] Tancic,Lj.: Classic Interior Ballistics, Ministry of
Defence, Human Resources Sector, Military
Academy, Belgrade, 2006.
[8] Tancic,Lj.: Interior Ballistic Design, Ministry of
Defence, Human Resources Sector, Military
Academy, Belgrade, 2014.
[9] Jaramaz,S., Mickovic,D.: Internal Ballistics,
University of Belgrade, Faculty of Mechanical
Engineering, Belgrade, 2002.
[10] Cvetkovic,M.: Interior ballistics, Military

The average experimental value for maximum pressure


(294.77 MPa) practically is identical with the model
(295.1 MPa) for 20 elements for variable s, and all
experimental curves are compatible with the calculations
so that the correctness of the given theory is validated.
The experimental results prove the character of pressure
in small-arms barrel as function of time and position in
the barrel.

6. CONCLUSION
This paper presents the design choice of the optimum
solution to the ballistic design complex systems for a
particular caliber. For an efficient project management in
the complex technical system implementation, special
attention is paid to risk and project quality management.
The probability of risk is reduced to a minimum by the
analysis of risk factors and a strategy is defined to reduce
and respond to the risk event. What specifically should be
done to minimize the impact of risk events is described in
detail.
The quality of the project is defined by the regulations of
permanent international commission on the quality of
products that define the allowed tolerance for all inputs of
crucial influence upon the outputs of specific complex
systems.
The main goal of the work is reflected in the choice of the
optimum ballistic parameters, primarily: the most suitable

[11]

[12]

[13]

[14]

[15]

[16]

[17]

Technical Academy of Yugoslav Army, Belgrade,


1998.
Jovanovic,P., Tancic,Lj., Lajsic,Dj., Vukovic,M.:
Innovation Management and Project Management,
College of Project Management, Belgrade, 2012.
Cvetkovic,M.: Application insteady gas-dynamic on
the interior ballistics problem to small arms, Ph. D.
dissertation, High Military Technical School,
Zagreb, 1984.
Tancic,Lj.: Numerical computation of insteady
models in interior ballistics to small arms, Ph. D.
dissertation, Military Technical Academy of
Yugoslav Army, Belgrade, 1997.
Smith,G.D.: Numerical Solution of Partial
Differential Equations: Finite Difference Methods,
Oxford University Press, 1985.
Cvetkovic,M., Tancic,Lj.: A comparisons analysis
experimental and calculations results for two-phase
flow in the small arms, II International Symposium
Contemporary Problems of Fluid Mechanics,
University of Belgrade, Faculty of Mechanical
Engineering, Chair of Fluid Mechanics, Belgrade,
1996.
Cvetkovic,M., Tancic,Lj.: Analysis of conditions
numerical modeling of two-phase flow in small arms,
XXII Yugoslav congress of applied and theoretical
mechanics , JUMEH 97, Vrnjaka banja, 1997.
Cvetkovic,M., Tancic,Lj.: Initial parameters

[18]
[19]

[20]

[21]

[22]

[23]

[24]


influence on two phase flow model in the small arms


barrel, 6-th Symposium On Theoretical and Applied
Mechanics, Republic of Macedonia, Struga, 1998.
October 1-3
***, Commission Internationale Permanente (CIP),
Geneve, 1995.
Pantelic,I.: Introduction in theory engineer's
experiments, People's University - Radivoj Cirpanov,
Novi Sad, 1986.
Tancic,Lj.: The experimental research two phase
flow parametars in the small arms barrell, Review
Technical Research of Yugoslav Army, Belgrade,
1999., No 2, P. 3 -9
Cvetkovic,M., Tancic,Lj.: Two phase flow models
sensibility in small arms on initial parameters,
Review Technical Research of Yugoslav Army,
Belgrade, 1999., No 3, P. 3 -7
Tancic,Lj.: Optimization of internal ballistic
parameters at the design small arms, Second
Scientific Conference of the defense industry (I8I13), Belgrade, 2007.
Jovanovic,P., Tancic,Lj.: Project quality menagment
of complex interior ballistic systems, 5th International
Scientific Conference on Defensive Technologies,
OTEH 2012, Belgrade, 2012., 18-19 September
Jovanovic,P.: How to become a good manager, College
of Project Management, Belgrade, 2010.

STRESS ANALYSIS OF INTEGRATED 12.7 mm MACHINE GUN MOUNT


ALEKSANDAR KARI
Military Academy, University of Defense in Belgrade, aleksandarkari@gmail.com
DUAN JOVANOVI
Faculty of Engineering, University of Kragujevac, djovanovic.jovanovic7@gmail.com
DAMIR JERKOVI
Military Academy, University of Defense in Belgrade, damir.jerkovic@va.mod.gov.rs
NEBOJA HRISTOV
Military Academy, University of Defense in Belgrade, nebojsahristov@gmail.com

Abstract: The paper describes the problem of integrating a 12.7 mm machine gun on a mobile platform. Based on the dimensions of the machine gun, a model of the machine gun mount with a cradle is made. The completed model is fully functional and realistic. Using optimized internal ballistic parameters, the calculations of the recoil forces and of the loading of the mount and the rotating bearing are executed. The loading calculation of the bearing was made in two ways. In the first case the finite element method was applied and the software package FEMAP was used. The second method is based on calculating the resistance components of the bearing from the equilibrium conditions. At the end, a comparative analysis of the data obtained by these two methods was performed.
Keywords: integrated machine gun, stress analysis, recoil force, bearing.
characteristics that affect the efficiency of weapons,
primarily projectile velocity at the muzzle, and the required
tactical and technical characteristics of the machine guns
presented in Table 1 [2].

1. INTRODUCTION
The resistance of weapon and its firing stability depend on
the value of recoil forces [1]. In the case of the integration
of weapons, if the barrel is rigidly connected to the mount,
the recoil force is fully transmitted to the mount. In order to
avoid inconveniences of such connections, or extended
firing time load mount and reduce its intensity, barrel is
elastic connecting to the mount which enables the
movement of the barrel or whole weapon during firing in
the direction of the axis of the barrel.

Table 1. Tactical-technical characteristics of the machine gun M87

Characteristic                          Value
Caliber, d                              12.7 x 108 mm
Barrel length, l                        1100 mm
Total length of the machine gun, L      1560 mm
Weight of the machine gun, m            24.8 kg
Rate of fire, n                         700 rounds/min
Effective range, Dmax                   1500 m

The study of the conditions to be satisfied by the mount of a machine gun installed on a mobile platform is the main goal of this paper. This involves the construction of a rotating mount that can withstand the loads created by the recoil force, as well as the calculation of the forces that are transmitted to the mobile platform. As an example, the 12.7 mm M87 machine gun was selected for the integration.

2. RECOIL FORCE
For the selection the most rational construction of mount, it
is necessary to define the acting forces on the mount as a
whole, but also on its individual parts. In the process of
firing of the machine gun, force of powder gases acting on
the bolt. After unlocking, on the bolt acts the force of the
return spring that opposes the force of the pressure powder
gases.

In order to reach the optimal solution, the conditions of


minimal force and minimal recoil mass of mount are set,
taking into account the boundary allowable conditions of
materials resistance, as well as, the conditions in tactical
use. The research was carried out through four main
phases:1) Optimization of internal ballistic parameters and
their influence on the recoil force of the machine gun; 2)
Determination of machine gun recoil force; 3) Designing the
appropriate mount based on the construction of the machine
gun; 4) Determination of the mount and bearing loading.

If the machine gun is on the mount and there is an elastic


connection between gun and mount, there is the resistive
force of recoil (force of shock absorber, buffer, hydraulic
brake, etc.). There is also a frictional force on the guide rails
of machine guns which opposes to movement of a machine
gun during recoil.

The goal is to minimize the pressure of powder gases in the


barrel, but with as less as possible reduction of

Finally, the total recoil force is the vector sum:

P_kn = F_bg + F_op + F_bf + F_tr    (1)

where: F_bg - force of the powder gas pressure, F_op - force of the return spring, F_bf - force of the buffer and F_tr - frictional force.

The activity of these forces is shown in Picture 1.

Picture 1. The forces acting on the weapon

The listed forces are parallel to the axis of the barrel, so the intensity of the total recoil force can be expressed as:

P_kn(x1, x2, t) = F_bg(t) − F_op(x1) − F_bf(x2) − F_tr(x2)    (2)

where: t - time from the start of firing, x1 - movement of the bolt and x2 - movement of the machine gun during recoil.

Some forces were calculated by applying mathematical, mechanical and ballistic methods, while others were measured on a real model of the machine gun.

By the internal ballistic calculation [3], the obtained maximum pressure in the barrel of the 12.7 mm M87 machine gun is pm = 328.98 MPa.

Picture 2. The optimal internal ballistic curve of pressure in the barrel

From a total of 15 parameters entered in the input file of the internal ballistic calculation, the three basic ballistic characteristics of the gunpowder are varied: the co-volume of the powder gases, the specific energy of the powder gases and the unit burning rate of the gunpowder.

The parameters are changed so as to avoid the case of incomplete combustion of the gunpowder [4]. By varying these parameters within the range of 10% and by selecting their optimum values, the maximum pressure in the barrel is reduced to pm = 308.25 MPa, which represents a reduction of the maximum pressure in the barrel of 6.3% (Picture 2). The calculated velocity of the projectile at the muzzle is V0 = 841 m/s (Picture 3).

Picture 3. The optimal calculated velocity of the projectile in the barrel

When the bottom of the projectile passes over the orifice, powder gases enter the gas chamber and act on the piston, creating a new force.

The pressure in the gas cylinder is calculated based on the model by E. L. Bravin [5]:

p_k(t) = p_α · e^(−t/b) · (1 − e^(−φ·t/b))    (3)

where: p_α - pressure of the propellant gas in the barrel at the orifice for the evacuation of powder gases at the moment when the projectile passes the orifice, t - duration of action of the gas on the piston, φ - coefficient of powder gases effect and b - the ratio of unit impulse and pressure.

With all of the calculated values that exist in expression (3), the function of the pressure change in the gas cylinder can be represented by:

p_k = 120·10^6 · e^(−t/0.001343) · (1 − e^(−31.257·t/0.001343)).    (4)

The force of the bolt return spring acts after the unlocking of the bolt. The change of the force of the bolt return spring, depending on the bolt travel x1, is obtained experimentally:

F_op = 157 + 396.08 x1.    (5)

For the purpose of realizing the elastic connection of the machine gun and the gun-mount, a buffer is designed. The experimentally determined spring force of the buffer is:

F_bf = 760 + 102142.9 x2    (6)

where x2 is the displacement of the machine gun.

The total recoil force in the phases is shown in Picture 4.
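For illustration, the experimentally identified force laws (5)-(6) and the gas-cylinder pressure (4) can be evaluated directly in Python; the time and displacement values in the sample call are arbitrary, and the form of p_k follows the reconstruction of (3)-(4) given above, which should be treated as an assumption.

    import math

    def F_op(x1):
        return 157.0 + 396.08 * x1            # bolt return spring force, eq. (5)

    def F_bf(x2):
        return 760.0 + 102142.9 * x2          # buffer spring force, eq. (6)

    def p_k(t, p_alpha=120e6, b=0.001343, phi=31.257):
        # gas-cylinder pressure, following the reconstructed form of (3)-(4)
        return p_alpha * math.exp(-t / b) * (1.0 - math.exp(-phi * t / b))

    # sample evaluation at arbitrary illustrative points (units as used in the paper)
    print(F_op(0.05), F_bf(0.01), p_k(0.0005))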


Picture 4. The recoil force change in phases


Picture 6. Field stress of gun-mount (MPa)

3. LOADING OF DESIGNED GUN-MOUNT


AND BEARING
In order to calculate the forces acting on the bearing and on
the basis of technical documentation from the factory
"Zastava-Arms", by software CATIA V5 R18 model of
machine gun assembly with a cradle and gun-mount is made
(Picture 5). On the basis of this 3D model, the required
positions of the barrel axis and the mass of projected cradle
and the gun-mount are obtained [6].

a) in the x-direction

b) in the y-direction

c) in the z-direction
Picture 7. Loading of bearing in MPa
By simulating loading machine gun assembly, cradle and
gun-mount were obtained the following values of forces that
load bearing: Fx = 1949 N, Fy = 1238 N and Fz =3284 N [6].

Picture 5. Isometric view of the assembly machine guns


on the cradle and gun-mount

The calculation was performed taking into account the


following assumptions [6, 7]:

The effect of force of gravity is entered as a negative


acceleration acting on the whole body, while the effect of
the recoil force entered as the effect of surface pressure (the
bottom of the barrel). As constrains were adopted to the
lower surface of the bearing is fixed. Thanks to the
simplified geometry, finite element mesh can be generated
automatically. The final element is the shape of a
tetrahedron. After completed analysis, required stresses and
forces are reading. In Picture 6 field stress in MPa is
represented in isometric view of the assembly.

Mobile platform to which is mounted a machine gun is


located on a horizontal surface;
Distribution of additional vertical load on the ball of
bearing is sinusoidal;
Horizontal forces distributed per balls as on radial ball
bearings.
To determine the strain of roller tracks it is necessary to
determine the calculation load on the balls and the roller
track, and then check the strain on the surface pressure.

For the calculation of bearing is necessary to know the


intensity of the forces acting on the bearing. The stress of
bearing caused by recoil force and force of gravity. The
simulated effects of the forces on the bearing in the
direction of x, y and z axes are shown in Picture 7.

Picture 8 shows the forces that overload the rotating bearing


in the vertical plane. Markings used in the figure are the
following: Pkn recoil force, Fz vertical reaction of
bearing, G the total weight of the assembly machine gun
and gun-mount, D mean diameter of the bearing, G, G
polar coordinates of weight force, z, z polar coordinates
of vertical reaction [7].

Picture 8. Scheme of the rotating bearing loading in the vertical plane x-z

The vertical reaction of the bearing is obtained from the force equilibrium condition (Picture 8):

ΣF_z = F_z − G − F_R sin ε = 0    (7)

where ε is the elevation angle and F_R the resistance force of recoil.

The polar coordinates of the vertical reaction were determined from the momentum equations ΣM_x = 0 and ΣM_y = 0, and the geometric sizes are taken from the CAD model (b = 135.7 mm, h = 609.2 mm, ρ_G = 28.747 mm).

Because ρ_a = 0 (the axis of the machine gun is in the plane of the axis of rotation of the bearing) and ρ_y = 0 (the force of gravity is in the longitudinal plane of the gun-mount), the complete assembly is symmetrical with respect to the plane x-z and the angle coordinate is φ_G = 0. Then the polar coordinate ρ_z is:

ρ_z = (G ρ_G cos φ_G + F_R b sin ε − F_R h cos ε) / (G + F_R sin ε).    (8)

The force F_R in this case is the maximum force of the buffer spring (F_R = 2292.14 N). For the minimal elevation angle ε = 0 the value of the vertical reaction is minimal, F_z,min = 1358.7 N, whereas for the maximum elevation angle ε = 75° the value of the vertical reaction of the bearing is maximal, F_z,max = 3572.7 N.

For the critical case of momentum loading of the bearing (ε = 0), according to Picture 8, we get ρ_z = 999 mm. If |ρ_z| > D/2, the load tends to overturn the assembly and a roll-over protection must operate. In the specific case (D = 415 mm) it is obtained that the structure is not stable in terms of rolling over, and it is necessary to use the roll-over protection.

The effect of the forces on the bearing in the horizontal plane (x-y) is shown in Picture 9. The markings used in the figure are the following: F_h - horizontal reaction of the bearing, F_mk - reaction of the mechanism for rotation, φ_h - angle coordinate of the horizontal reaction and φ_mk - angle coordinate of the reaction of the mechanism for rotation [7].

Picture 9. Scheme of the rotating bearing loading in the horizontal plane x-y

The coordinates of the resulting horizontal reaction F_h are in general (D/2, φ_h). The coordinates of the reaction of the mechanism for gun-mount rotation F_mk are determined by the design (a = D/2, φ_mk = 45°). The unknown parameters (F_h, F_mk and φ_h) are determined from the equilibrium conditions:

ΣF_x = 0:  F_mk sin φ_mk + F_h cos φ_h − F_R cos ε = 0
ΣF_y = 0:  F_h sin φ_h − F_mk cos φ_mk = 0    (9)
ΣM_z = 0:  F_R a cos ε − F_mk D/2 = 0

On the basis of the system of equilibrium conditions (9) and the condition that the axis of the machine gun is at the geometric centre of the assembly, the following parameters are obtained:

F_mk = 2 F_R a cos ε / D
F_h = F_R cos ε · √(1 − (4a/D) sin φ_mk + 4a²/D²) = 2292.14 N    (10)
tg φ_h = 2a cos φ_mk / (D (1 − (2a/D) sin φ_mk))

The effect of the calculated forces and the loading of the bearing are shown in Picture 10 [7].

Picture 10. The loading of the rotating bearing
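As a numerical check of (7)-(8) (an illustration, not the authors' code), the quoted reactions and the overturning criterion can be reproduced from the values given in the text; G is taken equal to F_z,min and the 28.747 mm CAD dimension is interpreted here as ρ_G, which is an assumption.

    import math

    F_R = 2292.14            # maximum buffer spring force [N]
    G = 1358.7               # weight of the assembly [N] (equals F_z at eps = 0)
    D = 0.415                # mean bearing diameter [m]
    b, h = 0.1357, 0.6092    # geometric sizes from the CAD model [m]
    rho_G, phi_G = 0.028747, 0.0   # assumed polar coordinates of the weight force [m, rad]

    def vertical_reaction(eps_deg):
        eps = math.radians(eps_deg)
        return G + F_R * math.sin(eps)                     # from equilibrium condition (7)

    def rho_z(eps_deg):
        eps = math.radians(eps_deg)
        num = G * rho_G * math.cos(phi_G) + F_R * b * math.sin(eps) - F_R * h * math.cos(eps)
        return num / (G + F_R * math.sin(eps))             # polar coordinate of F_z, eq. (8)

    print(vertical_reaction(0.0), vertical_reaction(75.0))  # about 1358.7 N and 3572.7 N
    print(abs(rho_z(0.0)) > D / 2.0)                        # True -> roll-over protection required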


By comparing the values of the reaction forces of the rotating bearing obtained by these two methods, it can be seen that they are very close: they differ by about 8% in the vertical direction and by about 1% in the horizontal direction.

5. CONCLUSION

In this paper the results of two methodologies for the loading calculation of a heavy machine gun mount are compared.

The paper presents the calculation of the recoil force of the 12.7 mm machine gun, discusses the influence of the internal ballistic parameters on it, as well as its effect on the assembly of the machine gun, the cradle and the gun-mount.

Regardless of which way to calculate reaction forces of


rotating bearing, they can further used for the calculation of
the elements of a machine gun automatic control, as well as
for calculations of mobile platform loading during firing.

Based on the design parameters of the machine guns, a gunmount was designed together with the cradle and the
rotating bearing. Modeling was done in the software
package CATIA V5R18 with the condition of minimum
recoil force and minimum allowable mass of gun-mount.
Since the selected machine gun is gas operated, apart from
the basic force of pressure of powder gases at the head of a
bolt which causes recoil of machine gun, should be taken
into account the force of pressure of powder gases on the
forehead of the piston in the gas cylinder. All other forces
that occur during firing, are the resistance forces of recoil
and have the opposite direction.

ACKNOWLEDGMENT
This work is part of the project III47029 in 2016, funded by
the Ministry of Education, Science and Technological
Development of Republic of Serbia.

References
[1] Ristic Z. Mechanic of Artillery Weapons. (In Serbian),
Odbrana Media Center, Belgrade, 2016. (in print)
[2] Babak F. K. Machine guns. (In Russian), Polygon,
Sankt Petersburg, 2005.
[3] Cvetkovic M. Internal Ballistic. (In Serbian), Military
Academy, Belgrade, 1998.
[4] Tancic LJ. Internal Ballistics Design. (In Serbian),
Odbrana Media Center, Belgrade, 2014.
[5] Petrovic M. Mechanics of Automatic Weapons. (In
Serbian), Odbrana Media Center, Belgrade, 2007.
[6] Chobitok V. A. Construction and Calculation of Tanks
and Infantry Combat Vehicles. (In Russian), Voennoe
izdatelstvo, Moscow, 1984.
[7] Jovanovic D. Integration of machine gun 12.7
millimeters on mobile platform master thesis, Faculty
of engineering, Kragujevac, 2015.

The maximum pressure in the barrel of the 12.7 mm machine gun obtained by the internal ballistic calculation is pm = 328.98 MPa. By varying the appropriate ballistic parameters of the powder, an optimal solution is obtained (pm = 308.25 MPa) that provides a minimal recoil force with the least possible degradation of output values such as the initial velocity of the projectile.
With such obtained recoil force, the calculation of load and
forces acting on the rotating bearing was made. The
calculation is done in two ways. The first used the method
finite element by program FEMAP. The most loaded parts
of the gun-mount are under stress from 19.43 to 38.83 MPa.
The second method is based on calculation of bearing
components load from the equilibrium condition.


STRATEGY IMPLEMENTATION OF DUAL-SEMI-ACTIVE RADAR


HOMING GUIDANCE WITH COUPLING OF TANDEM GUIDED AND
LEADING MISSILE OF AIR DEFENCE MISSILE SYSTEM ON REAL
MANEUVERING TARGET
MARKOVIĆ STOJAN
University of Defence, Military Academy, Pavla Jurisica Sturma 33, 11000 Belgrade, Serbia, stomarkovic@yahoo.com
MILINOVIĆ MOMČILO
Faculty of Mechanical Engineering, University of Belgrade, Kraljice Marije 16, 11120 Beograd 35, Serbia,
mmilinovic@mas.bg.ac.rs
NENAD SAKAN
Institute of Physics, University of Belgrade, Pregrevica 118, Zemun, 11080 Belgrade, Serbia, nsakan972@gmail.com
Abstract: The paper investigates the possibilities of a newly formed dual SARH (Semi-Active Radar Homing) system of SA-6 missiles of the mid-range category against standard manoeuvrable targets. The system is set up as dual-cooperative guidance of the first launched (guided - slave) missile by the second launched missile (leading - master), with a time delay of 3 s. The angular position of the target is measured using the two existing RHHs which, together with additional illuminators on the missiles, represent two active seekers, with a period of mutual scanning and of changing the roles (status) of 1 s. The missiles also communicate synchronously with each other through the free caudal antenna, and the situation, based on the smaller current error, determines the status of each missile. The process of coupled guidance is modelled with a specially designed program in MATLAB, with appropriate adjustment of the AD, the AL and the effective radius of action of the warhead. The missiles are guided to the calculated point of overtaking by the classical PN method with a changing launch angle of the SPLV, by varying the distance, altitude and speed of the target and the other given flight parameters. The simulations examined specific conditions and regimes of shooting with one or two missiles; the results were correlated and conclusions were drawn about the most efficient mode and the optimal performance of the dual system of homing AAD missiles.
Keywords: slave and master missile guidance, control system simulation in MATLAB, dual navigation missile, Semi-Active Radar Homing, Proportional Navigation, miss distance missile-target, radar lighter.
1. INTRODUCTION
Classical artillery, with multiple tools, relies upon prediction of the target trajectory in open-loop control, with the help of optical or radar devices for control, tracking and action, i.e. fire control, primarily against LFT (Low Flying Targets). In order to achieve the required accuracy of action, both against LFT and against medium-altitude targets, AAD (Anti-Air Defence) missile systems are used which have closed-loop automatic control with continuous reading of errors, in order to bring the missile to the meeting with the target and to translate this into a corrective maneuver with the additional required acceleration. The guidance law is designed so that the distance to the target (the ultimate current miss) decreases to the lowest possible value; in the real world this results in a final miss with respect to the target, which is compensated by the action of the missile's remote radar lighter. A representative and reliable way of guiding a missile in the mid-range AAD domain is Semi-Active Radar Homing, in which the source of radiation which illuminates the target is located on the ground, both at the beginning and, in the next phase of guidance, below the engagement, and which at the end provides reliable work of the missile's RHH (Radar Homing Head).
The disadvantage of this system is that it ties up a resource on the ground and thereby prevents its engagement in launching and guiding other missiles at targets in AS (Air Space). Another drawback of this system is that it may warn the opponent, i.e. the aircraft serving as a target, that missiles have been launched at it and that they are being guided from the ground. A semi-active RHH has the potential to measure the relative direction to the missile as well as the angular rate of the LOS (Line of Sight) missile-target, but there is no possibility of measuring the distance to the target. In this paper a missile with the SARH way of guidance is analyzed which has two rear tail antennas, able to receive the direct illumination signal facing it, which carries a reference frequency and thus enables the measurement of the Doppler frequency and the estimation of the MF (Mixed Frequency); the change of the distance to the target is, however, irrelevant and does not need to be measured separately. This channel is desirable, and its role is not to guide the missile but to allow it to distinguish the target from clutter, interference signals and CEW (Countermeasure Electronic Warfare). There is a second, parallel channel, that of angular lock-on and guidance upon the target through the front face antenna, regulated by the AOT (Automat Operation on Target), which later becomes the

dominant in missile guidance, and can be supported by the services of other radar and optical devices (TOV - Television Optical Viewfinder) in the air. Many years of observation of the AAD system, with all its pros and cons, imposed the idea of specific research into methods of fighting modern targets in AS, for which the probability of hit and destruction decreases relative to the initially set requirements. In other words, effectively reducing and zeroing the homing-missile error, in the face of increased target range and speed, its CEW and its maneuver, is what the world has been working on intensively in the last ten years, through the development of modern guidance theories and the application of the most modern mathematical and software tools.

Pdual M1,M2 = 1 − (1 − PsinglM)²                  (1)


before launching the missiles, which then fly ballistically towards the POT (Point of OverTaking) with an accuracy of about ±4° and by themselves do not see the target. With the burn-out of the start phase of missile propulsion the necessary missile speed is achieved and, at the same time, other missiles and resources are protected by the delayed activation of the RAMJET engine. After that, stabilization of the missile occurs, followed by its stable and controllable flight and engagement towards the already calculated point of overtaking, but now with the classical PN (Proportional Navigation) method within the default AL (Area of Launch) and its borders. This means that the radar-computer part of the system on the ground has carried out all the prescribed complex procedures (46 equations), actions and modes of work, from observation, expectation, appearance and allocation of the target in the launch zone, to giving accurate coordinates and creating the conditions for launch. The arming time of the RL (Radar Lighter) is also calculated, at 0.6 s of distance before the encounter with the target, with a given AD (Area of Destruction) defined by its near and far, upper and lower limits, dictated by the height of the target, the dead zone, its speed, the target parameter and the type of aircraft serving as a target, figure 1.


The second phase of missile flight is the marching mode of work, when the RAMJET propulsion engine is activated. The missile is included in the guidance loop, while the target is still continuously illuminated by the CW illuminator of the 1S91 (RSSG - Radar Station for Surveillance and Guidance) and at the same time monitored by the TR (Tracking Radar), i.e. the pulse radar. The necessary presence of the continuous and pulse transmitter RG (Radar Guidance 1S31), together with the observation RS (Surveillance Radar 1S11), all the while guiding the missile to the hit and destruction of the target, represents a great technical and tactical problem of hitting the target while at the same time the crew and resources survive. Practice shows an outdated system, which makes it inferior in relation to modern offensive resources from AS. Also, the possibility of hitting modern targets is drastically reduced, and achieving the required real norms demands a far greater consumption of missiles per target (from 18 to 22 missiles) in a short period of time, which is economically unsustainable

Figure 1. Starting kinematics of the dual-cooperative homing of two missiles on one target

and impossible in our conditions. Hence the exclusion of the RG from the system and its substitution on the missile (placing an additional illuminator on the missiles), along with the modernization of the existing receivers (digitalization, quieting in SST - Solid State Techniques) in the existing RHH, with innovated roles of the tail antenna.

Figure 2. Area of bursting of the WARHEAD fragments and optimal position of the antenna for activation of the RL at the midline
One of the ways, or strategies, of real application and of improving the efficiency, i.e. the probability of hitting and destroying the target (manufacturer's information: p1M = PsinglM = 0.80 and p2M = 0.96) for the MIG-19, without considering CEW, of the existing mid-range system, will be analyzed and exposed on the basis of the initial equation (1).
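A short numerical check of equation (1) with the manufacturer's single-shot probability quoted above (PsinglM = 0.80) can be written as the following Python sketch; the function name is illustrative and the two shots are assumed to be independent events.

```python
# Sketch of equation (1): probability of destroying the target with two coupled missiles,
# assuming the two shots can be treated as independent events.
def dual_hit_probability(p_single: float) -> float:
    return 1.0 - (1.0 - p_single) ** 2

p1 = 0.80                       # manufacturer's single-missile probability for the MIG-19
print(dual_hit_probability(p1)) # 0.96, matching the quoted p2M
```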

The missiles divide between themselves interchangeable roles (the guided - slave missile M1, which leads by 3 s and is always in the role of attacking the target, and the leading - master missile M2, which illuminates the target T, flies behind the guided M1 and supports it); this is the core of the cooperatively coupled homing of two missiles on one target, SARH-ARH (Active Radar Homing), figure 4 [2].

2. FORMULATION OF THE PROBLEM OF DUAL HOMING MISSILES
The initial phase of flight of the AAMS (Anti-Aircraft Missile System) is command radio-guidance, whose realization in the system initially begins on the launcher,

Picture 3. Starting conception of the strategy simulation of dual homing AAD missiles on one target
relative to the SNR for the ARH method, through the square of the ratio of the distances RM1T and RM2T, eq. (2) [1].

Figure 4. Proposed decision-making algorithm in tandem homing of missiles
The missiles change roles by changing the specified status at defined intervals of time, thus obtaining SARH in the air, with all its known pros and cons. The algorithm applied in the strategy of dual-coupled homing is shown in figure 4. It can be seen that the guided missile M1 travels to the PO (Point of Overtaking), optionally guided by the missile which illuminates the target, with criteria for the change of roles of the missiles. The decision algorithm and the running parameters, which are in accordance with the states of the missiles and are expressed as control functions of time in the respective sequences, are processed in [2], from the time of target acquisition to its cooperative destruction.
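The role-switching rule of figure 4 (status exchanged on a fixed clock, with the missile that currently has the smaller miss kept in the guided/attacking role) can be sketched as below; the data structure and names are assumptions made for illustration, not the authors' MATLAB implementation.

```python
# Hedged sketch of the dual-missile role switching (SARH <-> ARH) at a fixed clock interval.
from dataclasses import dataclass

@dataclass
class MissileState:
    name: str
    current_miss: float  # current estimated miss distance, m

def assign_roles(m1: MissileState, m2: MissileState):
    """Missile with the smaller current miss attacks (guided/slave); the other illuminates (leading/master)."""
    guided, leading = (m1, m2) if m1.current_miss <= m2.current_miss else (m2, m1)
    return guided, leading

# Example: roles are re-evaluated every 1 s of the simulation clock.
m1, m2 = MissileState("M1", 18.0), MissileState("M2", 42.0)
for t in range(0, 4):                      # 4 s of simulated decisions
    guided, leading = assign_roles(m1, m2)
    print(f"t={t}s  guided={guided.name}  leading={leading.name}")
    m2.current_miss *= 0.5                 # illustrative error evolution only
```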

Figure 5. MATLAB block scheme of the simulation

SNRSARH = SNRARH / (RM1T / RM2T)²                  (2)

The missiles fly based on the criterion of the set current miss m (md - miss distance) and of achieving the lowest final miss in relation to the position and maneuver of the target in the area. The starting kinematic pictures, as the basis of the simulation, are shown in figures 4 and 5.
The missiles change roles SARH - ARH with an interval of 1 s. Their effectiveness can, in general, be expressed through the SNR (Signal to Noise Ratio) for the SARH method

An unfavorable position, given the structural constraints and performance of the missiles (max 4 s of flight along a predictable path), may at some moment make the second missile, the one that had the larger error, the priority. So, there is a possibility of additional processing through the GP (Generator of Searching) in the homing head, i.e. of an additional lock-on (if the target has been lost) by flying to the remembered POT (Point of Overtaking). Also, the missile has enough time and benignity of movement so that, in the area, the guided missile places its longitudinal axis and the beam angle of the RL transmitter in the optimal position parallel to the longitudinal axis of the aircraft (the tilt of the radiation pattern in relation to the longitudinal axis is decreased by commanded switching of the antenna) for the action of the remote radar lighter and the destruction of the target, figure 6.


At the input, a target module is formed, together with a module for comparing the errors of missile M1 relative to M2 and a module of additional calculation with respect to the effective radius of action of the W (Warhead). The system has the possibility of setting the stroke duration (clock) of the roles of the missiles (guided, leading), with the necessary delay of the launch of missile M2 of 3 s. The MATLAB simulation model, picture 5, made on the basis of the kinematic relations of picture 1, includes: the model of gravity and of the shape of the Earth, the kinematics and the law of guidance, the dynamics and 6DOF, disturbances and settings, the aerodynamics (parameters and coefficients), the missile launcher, the geometry of the missile, the radar lighter and the warhead, the mass-propulsion-inertia characteristics, running the simulation with default parameters, reading the file, selecting the time of change of missile status, ending at the closest point of the encounter, and presentation of the data and results (diagrams) of the given simulation. In the simulations, with solutions for ballistic flight, ARH mode and SARH mode, real parameters of the missiles were used: navigation constant [N = 3.5], timing constant of the HHM [Tg = 0.04 s], memory of the Point of Overtaking the Target (POT) from the Generator of Searching (GS) of the RHHM [tmemory = 2.5-4 s], radius of efficient action of the warhead [refW = 25 m], arming time of the RL [tarmRL = 0.6·DPOT], coefficient of effectiveness of the radar lighter [Kru > 0.90], figure 6, side overload [ab = 15g], longitudinal overload [au = 23g], loss of guidance until the meeting with the starting maneuver of flight [tvgsmaneuver = 6-8 s], clearance miss of the missile [rpr = 3-20 m].
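Since the missiles are guided by classical PN with the navigation constant N = 3.5 listed above, a minimal sketch of the PN acceleration command (a = N·Vc·dλ/dt) is given below as an assumption-level illustration; it is not the 6DOF MATLAB model described in the paper, and the closing velocity and line-of-sight rate are illustrative values.

```python
# Minimal proportional navigation (PN) sketch: lateral acceleration command
# a_cmd = N * Vc * lambda_dot, with the navigation constant used in the paper (N = 3.5).
def pn_acceleration(nav_constant: float, closing_velocity: float, los_rate: float) -> float:
    """Return the commanded lateral acceleration in m/s^2."""
    return nav_constant * closing_velocity * los_rate

N = 3.5            # navigation constant from the simulation parameters
Vc = 900.0         # assumed closing velocity, m/s (illustrative value)
los_rate = 0.02    # assumed line-of-sight rate, rad/s (illustrative value)
print(pn_acceleration(N, Vc, los_rate), "m/s^2")
```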

Figure 6. Dependence of Kru on the speed of the target (bomber, fighter) and on the miss of the AADS missile

4. SHOWING THE RESULTS OF THE MATLAB SIMULATION AND RESEARCH
Based on the data collected in the real world, and setting up the SARH model as a bistatic radar in the air (alternating shifts between giving active radiation and semi-active reception), the results of the tandem launch of two missiles (guided and leading) on real targets (strategic bomber and attack fighter) were obtained. Also, the experiential data provided by the original shooting rules of the system were used to amend the input data and parameters, on which basis numerous simulations were made using MATLAB R2011a [4], whose Simulink model was custom-built for the analysis and set up for this type of missile, figure 5. The results are shown in Table 1. First, the case of shooting with forced release of missiles with SPLV-1,2 is given.

Figure 7. Integral probability law of the SA-6 missile miss being less than 12 m, at various target speeds, for a given probability P

Then shooting was examined individually: with one missile on a modern target flying to meet at the standardized height of 7 km, and then dual shooting on a target flying with zero radial velocity (helicopter) at 3 km. Then outgoing targets were engaged, as defined by the manufacturer during the development of the system (strategic bomber and attack fighter MIG-19).

3. REALIZATION OF THE SIMULATION OF TANDEM DUAL COUPLED MISSILES
The conceptual scheme of the simulation of coupled guidance of tandem AAD missiles, in the software program MATLAB, is shown in picture 5. We see that two 6DOF missile modules are mathematically coupled with the autopilot and the executive actuators of the missiles.

Table 1. Results of the MATLAB simulation for default parameters of the target and a variety of modes of shooting and launching of AADS missiles

The cases of shooting in the autonomous mode (2K12M) and with centralized (K-1M) fire management from the upper level were considered. In the end, the results were compared for various modes of launching missiles (shooting with one missile and shooting with two coupled missiles, at different specific altitudes of 0.01-1 km and 3-7-12 km, and set target speeds, incoming and outgoing), as well as for other parameters of the target (helicopter, drone, bomber, attack fighter, modern fighter - incoming and outgoing), testing the time shift of the status of missiles M1 and M2 and its impact on the ultimate miss, table 1. For that purpose, the algorithm given in pictures 3 and 4 was used for autonomous launching of two coupled missiles, with the simulation time limited by

the missile propulsion, the allowed overload, the parameters of the target, etc. Real technical parameters of the missile geometry and its aerodynamic parameters were used, blended together into two coupled 6DOF dynamics models and the kinematics associated with the target module (uniform movement, or maneuver as sinusoidal movement, or as changing of the position and target vectors). Table 1 shows the results of the simulation for various modes of launching and for varied parameters of the target (elevation, distance, altitude, speed, maneuver, maneuver frequency, parameter). In the analysis, the changed duration of the status of missiles M1 and M2 was tested. For the specific targets, by the criterion of the manufacturer of the SA-6 missile system, they were:

the strategic bomber and the attack fighter, serial numbers 8 and 9 in Table 1; t = 1 s was adopted as the optimal value of the duration of missile status.

Table 2. Research of the optimal time of the simulation shift of missiles M1, M2

Figure 8. Typical engagement geometry A-M

Figure 9.

The appearance of a representative radiation diagram, with two SPLV and tandem dual coupled AAD missiles, shown as variable kinematic quantities and trajectories of the target and missile, is given in figure 9. The given misses of the AAD missile agree very well with the given dependence Kru = f(r[m]), figure 6, referred to in [3], the original instruction rules of the producer of the system. Pictures 1, 2 and 3 show (a-l) the end results and the given misses. Comparing these results with the results of the launch of a single missile on the same target gives an improvement of about 3.9-8.02 %.

5. CONCLUSION

The obtained representative results show that the strategy of dual SARH, with coupled guided and leading missiles and an optimal interval of status change of 1 s, gives an improvement of around 8.02% on average, for almost all targets and modes of shooting. Strict requirements were thereby analyzed, dictated by modern high-performance targets (attack fighters) with curvilinear maneuver both in the encounter and in the leaving phase. The realistic performance of the SA-6 was taken into account, with the near and far boundaries of the zones of launch and destruction, at various altitudes. It turned out that the results of dual tandem rocket launching, the best solution for LF targets, at a distance from the launch of 9 km, height of 3 km, target speed of 1.2 Ma, side overload of 8g and parameter of 0 km, improve the ultimate miss of the rocket from 48.80 m to 11.83 m, which is within the limits of efficient operation of the radar lighter, and this is 412% in relation to the launch of a single missile on the same target. Due to the requirements set for the SA-6, which date from 55 years ago, the use of dual launching of two missiles represents a good basis for consideration and modification of SAMs in the mid-range category. Further directions of research should be directed towards expanding the mutually coupled guidance to modern targets with complex maneuver, improving the algorithm by the expanded method (PPN), and salvo launch of homing SAM missiles on one target.

References

[1] Mullins, J.: AMRDEC, Radar Technology - Radar Guidance Techniques, AIAA Symposium, Huntsville, Alabama, 2005.
[2] Markovic, S., Milinovic, M., Sakan, N.: Test simulator for the discrete multi-parametric decision flight system, OTEH 2014, Beograd, VTI.
[3] UARJ PVO, SSRP KUB-M - Teorija i pravilo gađanja (Theory and Rules of Firing), udžbenik-122, VIZ Beograd, 1977.
[4] MATLAB R2011a, Aerospace module, Simulink.

EXPERIMENTAL INVESTIGATION OF OILS IN FOUR-STROKE


ENGINES
SRETEN PERIĆ
University of Defence, Military Academy, Pavla Jurišića Šturma Str. No. 33, 11000 Belgrade, sretenperic@yahoo.com
BOGDAN NEDIĆ
University of Kragujevac, Faculty of Engineering, 6 Sestre Janjić Street, 34 000 Kragujevac, nedic@kg.ac.rs

Abstract: Confirming the basic causes of failures and eliminating them, i.e. controlling certain phenomena, defines proactive maintenance as a new method that reduces maintenance costs and prolongs the life of assets. Determination of the condition of tribomechanical systems plays a very important role in the development of the theory and practice of friction, wear and lubrication. Different physical-chemical and tribological methods are used today for the diagnosis of tribomechanical systems. Experience in the exploitation of technical systems has shown that the most effective failure prognosis is based on the parameters of particles created as a result of wear, which are reliable indicators of wear. Analysis of oil samples which contain particles created as a result of wear enables evaluation of the tribological condition of the system in different phases of its exploitation.
The paper presents the physical-chemical tests used in the analysis of oils for the assessment of their condition. Furthermore, the results of experimental research on the physical-chemical characteristics of the oil sampled from the engines of the vehicles PUCH 300GD, PINZGAUER 710M and IK 104 are shown. All of these vehicles were in regular use by the Serbian armed forces.
The performed research has revealed some significant changes in the physical-chemical characteristics of the oil for engine lubrication. These changes directly depend on the condition of all the engine elements, i.e. on their functional characteristics. The research results originate from the research of the paper's authors.
Keywords: monitoring, oil analysis, lubrication systems, proactive maintenance, diagnosis.

1. INTRODUCTION
Using oil analysis programs for engine oils has several benefits: reduction of unscheduled vehicle downtime, improvement of vehicle reliability, help in organizing effective maintenance schedules, extension of engine life, optimization of oil change intervals and reduction of the cost of vehicle maintenance.


Knowing the analytical properties of lubricants is the basis for decision-making in the development, production and application of lubricants. The lubricant classifications and approval systems specify many performance characteristics and analytical tests. The analytical tests are classical and instrumental. Instrumental techniques have the advantages of a small required sample quantity and rapid analysis. As a part of the common proactive strategy of hydraulic system maintenance, the concept of on-line monitoring has recently been introduced into practice [2-6,9]. It is a combination of measurement procedures by which the sample of fluid to be analyzed is taken directly from the system and the results of the measurements are obtained continuously. On-line monitoring considers, first of all, control of cleanliness classes (according to ISO, NAS, SAE), control of humidity, viscosity, permittivity (acidity) and temperature.

In application, oils change their properties through [1]: contamination by combustion products and metal wear particles, consumption of additives (which is chemical in nature and bears an impact on important oil functions) and base oil oxidation.
The primary role of engine oil is the lubrication of moving engine parts and the reduction of friction and wear of metal surfaces, which provides good engine performance and a long engine life. In order to provide a defined quality of engine oils during production, and for the final products to meet the product specifications, we need to know the physical-chemical characteristics of engine oils.

The following tests are the most used in condition


monitoring: Viscosity, Total Acid Number (TAN), Total
Base Number, Water, Spectrometric analysis and Particle
Count.

Certain physical-chemical characteristics which are significant for the quality of engine oils are achieved by adding additives to base oils. The most frequent additives are: viscosity index improvers, pour point depressants, detergents and dispersants for maintaining engine cleanness, antioxidants for preventing oxidation, and corrosion inhibitors for preventing corrosion.

Viscosity is the resistance of a fluid to flow and the most


important lubricant physical property. The fluid is placed
in a "viscometer" (a calibrated capillary tube for precise
205

OTEH2016

EXPERIMENTALINVESTIGATIONOFOILSINFOURSTROKEENGINES

flow measurement between two pre-marked points on the


tube) and pre-heated to a given temperature in a "viscosity
bath" (which is usually oil-filled). After the oil reaches
the desired viscosity temperature, gravity-influenced flow
of the oil is initiated in the viscometer and timed between
two calibrated points. This time becomes the determinant
for the result.
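For capillary viscometers of the kind described above, the kinematic viscosity is obtained by multiplying the timed flow by the calibration constant of the tube; a minimal Python sketch, assuming an illustrative calibration constant and flow time, is given below.

```python
# Sketch of the kinematic viscosity calculation for a calibrated capillary viscometer:
# nu (mm^2/s) = C * t, where C is the tube calibration constant and t the timed flow.
def kinematic_viscosity(calibration_constant: float, flow_time_s: float) -> float:
    """Kinematic viscosity in mm^2/s (cSt) from the viscometer constant (mm^2/s^2) and flow time (s)."""
    return calibration_constant * flow_time_s

C = 0.2617   # illustrative calibration constant of the viscometer tube, mm^2/s^2
t = 400.0    # illustrative timed flow between the two marks, s
print(round(kinematic_viscosity(C, t), 2), "mm2/s")
```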

of engine condition monitoring by oil analysis. This part presents the results of the experimental research of the physical-chemical characteristics of the engine oil sampled from the engines of PUCH 300GD, PINZGAUER 710M and IKARBUS IK 104P vehicles [10], [11].
The research was carried out on two PUCH 300GD vehicles (PUCH-1, PUCH-2), two PINZGAUER 710M vehicles (PINZ-1, PINZ-2) and two IKARBUS IK 104P vehicles (IK104P-1, IK104P-2).

Total Acid Number (TAN) is a neutralization number


intended for measuring all acidic and acid-acting
materials in the lubricant, including strong and weak
acids. It is a titration method designed to indicate the
relative acidity in a lubricant. The TAN is calculated from
the amount of KOH consumed. The acid number is used
as a guide to follow the oxidative degeneration of oil in
service.

The research was conducted through periodic sampling of oil from the engines of the vehicles listed above. Apart from the fresh oil (zero sample), samples were taken after 1.000 km, 2.000 km, 3.000 km, 4.000 km and 5.000 km for each vehicle.
The physical-chemical characteristics of the oil were examined in accordance with the standard methods shown in Table 1.

Total Base Number (TBN) is a neutralization number


intended for measuring all basic (alkaline) materials in the
lube (acid-neutralizing components in the lubricant
additive package). The converse of the TAN, this titration
is used to determine the reserve alkalinity of a lubricant.
The TBN is highest when oil is new and decreases with
its use. Low TBN normally indicates that the oil has
reached the end of its useful life.

Table 1. Implemented tests and methods for examining the physical-chemical characteristics of oil

Characteristic                              Method
Kinematic viscosity, mm2/s                  SRPS B.H8.022
Viscosity Index                             SRPS B.H8.024
Flash Point (°C)                            ISO 2592, ASTM D 92
Pour Point (°C)                             ISO 3016
Water Content, mas.%                        ASTM D 95
Total Base Number (TBN), mgKOH/g            ASTM D 2896
Insoluble substances in pentane, %          ASTM D 893
Insoluble substances in benzene, %          ASTM D 4055
Fe Content, %                               AAS
Cu Content, %                               AAS

Water can be detected visually if gross contamination is


present. Excessive water in a system destroys a lubricant's
ability to separate opposing moving parts, allowing severe
wear to occur with resulting high frictional heat. There are
several methods used for testing the moisture
contamination (crackle, FT-IR water, centrifuge, Karl
Fischer) each with a different level of detection (1000
ppm or 0.1 % for first three methods and 10 ppm or 0.001
% for Karl Fischer method).
Spectrometric analysis is a technique for detecting and
quantifying metallic particulates in used oil arising from
wear, contamination and additive packages. The oil
sample is energized to make each element emit or absorb
a quantifiable amount of energy, which indicates the
elements concentration in the oil. The results represent
the concentration of all dissolved metals and particles.
The equipment for spectrometric analysis is standard equipment in oil analysis laboratories today. It provides information on the technical system, its contamination and its wear condition relatively quickly and accurately. Spectroscopy is more or less blind to the larger particles in an oil sample, more precisely to particles greater than 10 µm in diameter, which are more indicative of an abnormal wear mode [7].


The analysis was done on the fresh (new) oils and on the oils used in the engines of the vehicles. During oil sampling, the choice of the sampling point was made carefully, according to the actual oil usage, which made each sample representative.
The wear mechanism of a tribological lubrication system consists in the wear of contact surfaces and in lubricant consumption. If there is wear of the contact surfaces, wear particles are present.
Regardless of the availability of numerous methods for diagnosing the physical-chemical changes of lubricants, in order to create a true picture of the condition of the lubricant from the user's system it is important to satisfy the precondition that a representative sample can be obtained. That is why it is extremely important to take the sample in a proper way.

Particle Count is a method used to count and classify


particulate in a fluid according to accepted size ranges,
usually to ISO 4406 and NAS 1638 [8]. There are several
different types of instrumentation on the market, utilizing
a variety of measurement mechanisms, from optical laser
counters to pore blockage monitors.

The allowable deviation limits of individual characteristics of the oil are conditioned by the type of oil, the working conditions and the internal recommendations of the lubricant manufacturer and the users. The limit values of the oil characteristics that require a change of the oil charge in the engine are given in Table 2. They represent

2. THE RESULTS OF OIL ANALYSIS AND DISCUSSION
This part presents the results of the examination of oil during its application in four-stroke engines by physical-chemical methods, in order to evaluate the possibilities

the criteria for the change of the oil charge. A deviation of even a single characteristic is sufficient to require a change of the oil charge, no matter which characteristic it is.
The engine oils used in the examined vehicles are shown in Table 3. The characteristics of the zero samples of engine oil are shown in Table 4, and the results for the used oil samples in Table 5.

Table 2. Allowed deviation values of physical-chemical characteristics of new and used oil

Characteristic                                    Max. allowed variation (engine oil)
Viscosity at 40 °C and 100 °C, mm2/s              20 %
Viscosity Index, %                                5 %
Total Base Number (TBN), mg KOH/g                 fall to 50 %
Flash Point, °C                                   20 %
Water Content, %                                  0,2 %
Products of wear - Fe content, ppm (µg/g)         100 ppm
Products of wear - Cu content, ppm (µg/g)         50 ppm

Table 3. Used engine oil in the examined vehicles [10]

Vehicle                SAE classification    API classification    Manufacturer
PUCH 300 GD            SAE 15W-40            API SG/CE             FAM Krusevac
PINZGAUER 710 M        SAE 30/S3             -                     GALAX Beograd
IKARBUS 104 P          SAE 15W-40            API SG/CE             FAM Krusevac

Table 4. Results of the zero samples of oil from the engine [10]

Characteristic                     FAM SAE 15W-40    GALAX SAE 30/S3
Color                              3,0               3,0
Density, g/cm3                     0,881             0,902
Viscosity at 40 °C, mm2/s          104,81            104,63
Viscosity at 100 °C, mm2/s         14,12             11,67
Viscosity Index                    135               100
Flash Point, °C                    230               240
TBN, mg KOH/g                      10,5              9,8

Table 5. The results of testing samples of used oil from the engines of the examined vehicles [10]
(samples 0-5 correspond to 0, 1.000, 2.000, 3.000, 4.000 and 5.000 km)

PUCH-1
Viscosity at 40 °C, mm2/s:    104,8   111,0   113,5   119,4   126,4   132,7
Viscosity at 100 °C, mm2/s:   14,12   14,64   15,47   16,05   16,63   17,56
Viscosity Index:              135     129     122     119     116     112
Flash Point, °C:              230     220     208     205     197     192
TBN, mg KOH/g:                10,5    9,1     7,2     6,5     6,1     5,2
Fe content, ppm:              -       98,4    123     137,1   149,4   165,3
Cu content, ppm:              -       4,9     5,9     6,7     7,3     7,9

PUCH-2
Viscosity at 40 °C, mm2/s:    104,8   110,4   111,8   113,8   115,9   127,5
Viscosity at 100 °C, mm2/s:   14,12   14,23   15,03   15,65   16,12   17,03
Viscosity Index:              135     131     126     123     120     115
Flash Point, °C:              230     215     210     204     202     188
TBN, mg KOH/g:                10,5    9,4     8,9     8,7     8,1     7,6
Fe content, ppm:              -       27,4    59,8    71,2    71,4    86,8
Cu content, ppm:              -       2       3,4     3,7     3,9     5,4

IK104P-1
Viscosity at 40 °C, mm2/s:    104,8   96,9    96,2    92,3    90,8    90,2
Viscosity at 100 °C, mm2/s:   14,12   13,74   12,86   12,46   12,32   12,29
Viscosity Index:              135     132     130     125     122     119
Flash Point, °C:              230     217     214     213     210     189
TBN, mg KOH/g:                10,5    8,8     8,7     8,4     7,9     7,3
Fe content, ppm:              -       30,1    32,5    35,6    37,5    38,5
Cu content, ppm:              -       1,5     1,9     3,2     4,4     4,9

IK104P-2
Viscosity at 40 °C, mm2/s:    104,8   104,4   101,9   97,1    94,8    93,1
Viscosity at 100 °C, mm2/s:   14,12   13,62   13,55   13,25   12,95   12,61
Viscosity Index:              135     133     131     127     124     121
Flash Point, °C:              230     212     210     202     193     184
TBN, mg KOH/g:                10,5    8,1     7,7     7,2     6,8     6,4
Fe content, ppm:              -       20,5    46,3    57,6    62,8    69,6
Cu content, ppm:              -       3,2     5,1     6,3     7,7     9,1

PINZ-1
Viscosity at 40 °C, mm2/s:    104,6   100,4   94,4    86,3    79,1    75,9
Viscosity at 100 °C, mm2/s:   11,67   10,9    10,3    9,96    9,37    8,74
Viscosity Index:              100     96      93      89      84      82
Flash Point, °C:              240     196     186     168     154     136
TBN, mg KOH/g:                9,8     9,6     9,1     8,3     7,6     7,1
Fe content, ppm:              -       19      19,8    38,3    54,3    105,4
Cu content, ppm:              -       3,5     4,1     5,3     6,9     8,7

PINZ-2
Viscosity at 40 °C, mm2/s:    104,6   100,9   96,1    88,6    82,2    76,9
Viscosity at 100 °C, mm2/s:   11,67   10,55   10,4    10,15   9,64    9,07
Viscosity Index:              100     97      95      91      87      84
Flash Point, °C:              240     193     177     159     143     128
TBN, mg KOH/g:                9,8     9,4     8,4     7,8     6,6     6,2
Fe content, ppm:              -       17,9    40,9    86,7    132,8   261
Cu content, ppm:              -       3,3     3,8     6       8,1     9,7

The viscosity index is an empirical number which shows how the viscosity of an oil changes with increasing or decreasing temperature. A high viscosity index indicates a relatively small tendency of the viscosity to change under the influence of temperature, as opposed to a low viscosity index, which indicates a greater viscosity change with temperature.

During exploitation it is desirable that the viscosity changes as little as possible with the change of temperature. If the temperature regimes during operation are changeable and cause major changes of viscosity, this may cause disruptions in the functioning of the system, which manifests itself as increased friction, wear and damage.
The change of the engine oil Viscosity Index is shown in Picture 1. A decrease in the oil Viscosity Index is evident for all vehicles, exceeding the limit of 5 % (Table 2).
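The limit values of Table 2 lend themselves to a simple automated check of each used-oil sample against its zero sample. The Python sketch below only illustrates that logic: the thresholds follow Table 2, while the example numbers are of the order of the PUCH-1 zero sample and its 5.000 km sample and are used purely for illustration.

```python
# Hedged sketch: flag used-oil samples whose characteristics exceed the Table 2 limits.
def oil_flags(zero: dict, used: dict) -> list:
    """Compare a used-oil sample with the zero (fresh oil) sample using the Table 2 criteria."""
    flags = []
    for key in ("visc40", "visc100"):                           # 20 % allowed change of viscosity
        if abs(used[key] - zero[key]) / zero[key] > 0.20:
            flags.append(key)
    if (zero["vi"] - used["vi"]) / zero["vi"] > 0.05:           # 5 % allowed drop of viscosity index
        flags.append("viscosity index")
    if used["tbn"] < 0.5 * zero["tbn"]:                         # TBN fall to 50 %
        flags.append("TBN")
    if (zero["flash"] - used["flash"]) / zero["flash"] > 0.20:  # 20 % allowed drop of flash point
        flags.append("flash point")
    if used["fe_ppm"] > 100:                                    # max Fe content 100 ppm
        flags.append("Fe")
    if used["cu_ppm"] > 50:                                     # max Cu content 50 ppm
        flags.append("Cu")
    return flags

zero = {"visc40": 104.8, "visc100": 14.12, "vi": 135, "tbn": 10.5, "flash": 230, "fe_ppm": 0, "cu_ppm": 0}
used = {"visc40": 132.7, "visc100": 17.56, "vi": 112, "tbn": 5.2, "flash": 192, "fe_ppm": 165, "cu_ppm": 8}
print(oil_flags(zero, used))   # ['visc40', 'visc100', 'viscosity index', 'TBN', 'Fe']
```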


Picture 2 shows the changes of the viscosity at 40 °C of the engine oils during exploitation.

Picture 1. The change of Viscosity Index [10]

The most important characteristic of engine oils is the viscosity, defined as a measure of the inner friction which acts as a resistance to the change of the positions of molecules in fluid flow under the impact of a shear force; in other words, it is the resistance of fluid particles to shear.

Picture 2. The change of viscosity at 40 C [10]


An increase of the viscosity at 40 °C of the engine oil is evident for the PUCH-1 and PUCH-2 vehicles, exceeding the limit of 20 %. A decrease of the viscosity at 40 °C of the engine oil is evident for the PINZ vehicles (exceeding the limit of 20 %) and the IK104P vehicles, Picture 3.

The viscosity is a changeable quantity and depends on the change of temperature and pressure. A higher temperature reduces the viscosity and makes the fluid thinner.

Viscosity "0" sample:


14,12 (SAE 15W-40)
11,67 (SAE 30)

20

Viscosity at 100C, mm 2/s

Multigrade engine oils, among numerous additives, always also contain viscosity index improvers. These additives are special types of polymers which, in small concentrations, significantly improve the rheological properties of engine oils, especially the viscosity and the viscosity index.
However, during the utilization of engine oils, degradation of the viscosity index improvers, i.e. break-down of the polymeric molecules, occurs. This results in a reduction of their molecular weight, which leads to viscosity loss and a decrease of the oil film thickness, causing the undesirable phenomena of friction and wear.


Picture 3. The change of viscosity at 100C [10]

The reasons for an increase of lubricant viscosity are as follows: oxidation of the lubricant, cavitation due to foaming of the lubricant, mixing of the lubricant with water, pouring and charging the system with an oil of higher viscosity than recommended, and contamination of the lubricant by solid particles and wear products.

TBN of "0" sample


10,5 (SAE 15W-40)
9,8 (SAE 30)

11

Total Base Number, mg KOH/g

PUCH1
Max. allowed decrease 50%
5,25
( SAE 15W-40)

10

The reasons for a reduction of lubricant viscosity are: contamination of the lubricant by fuel (for motor oils), shearing of the viscosity-improving additives, a drop of the flash point, breaking of the molecules, contamination of the lubricant by water without solubility, pouring and charging the system with an oil of lower viscosity than recommended, and the influence of the cooling liquid. The causes may also be high temperature, load, uncontrolled long intervals of use, an insufficient amount of oil in the oil system, inefficient cooling systems and the like.


Picture 4. The change of TBN [10]

As expected, the kinematic viscosity usually decreases in time due to fuel penetration; in well-maintained engines without fuel penetration there occurs a slight increase as a result of the increase of the oil insolubles.

TBN is a neutralization number intended for measuring


all basic (alkaline) materials in the lube (acid-neutralizing
components in the lubricant additive package). The TBN

is generally accepted as an indicator of the ability of the oil to neutralize harmful acidic byproducts of engine combustion. The TBN is highest when the oil is new and decreases with its use. A low TBN normally indicates that the oil has reached the end of its useful life. TBN is a measure of the lubricant's alkaline reserve and mostly applies to motor lubricants. If a lube contains no alkaline additives, there is little use in determining a TBN, as there will likely be none. Combustion acids, e.g. sulfuric acid, attack the alkaline reserve, and the TBN decreases as it is consumed.

(100 ppm, Table 2) for the PUCH-1 and PINZ-2 vehicles. The content of copper is significantly below the allowable limit (50 ppm, Table 2) for all vehicles.

Picture 4 shows the changes of the total base number (TBN) of the engine oils. A decrease of the TBN of the engine oil is evident for all vehicles. Up to 5.000 km the TBN value does not exceed the allowed limit, except for the PUCH-1 vehicle.


Picture 6. The change of content Fe [10]


Picture 5 shows the change of the flash point of the engine oils. The decrease of the flash point is noticeable and, by the end of the exploitation testing, it exceeds the allowed limit (20 %, Table 2) for the PINZ vehicles.
Flash point "0" sample:
230 (SAE 15W-40)
240 (SAE 30)

PUCH2
200

50

The flash point is the temperature at which the vapour created by heating the oil ignites on contact with an open flame. In engine oil analysis the flash point indicates the presence of fuel in the oil, which is a consequence of poor engine operation (bad work of the injectors). The reduction of the flash point is due to the penetration of fuel.


Picture 7. The change of content Cu [10]


3. CONCLUSION


The interpretation of used oil analyses is very complex, because the individual analyses are interdependent. That is the reason why it is necessary to consider the entire oil analysis and not to draw conclusions based on individual analysis results. It is also necessary to establish both normal and critical quality levels for specific oils in given engines and under specific application conditions.

Picture 5. The change of flash point [10]


Analysis of the contents of different metals that are in the
lubricant is very important. Metal particles are abrasive,
and act as catalysts in the oxidation of oils. In motor oils,
the origin of the elements may be from the additives, the
wear, the fuel, air and liquid for cooling. Metals from the
additives can be Zn, Ca, Ba, or Mg and that indicates the
change of additives. Metals originating from wear are: Fe,
Pb, Cu, Cr, Al, Mn, Ag, Sn, and they point to the
increased wear in these systems. Elements originating
from the liquid for cooling are Na and B, and their
increased content indicates the penetration of cooling
liquid in the lubricant. Increased content of Si or Ca,
which originate from the air, points to a malfunction of
the air filter.

The lubricant, being an inevitable factor in the tribomechanical system of the engine, has, apart from the usual lubricating role, also an important role in detecting the engine operation efficiency and condition. This is achieved through systematic monitoring of the oil in application and permanent contact between the motor oil manufacturer and the user.
Analyses of a used oil sample should always be compared with previous samples, and final conclusions should be based on trend analysis, which has two closely related objectives: to obtain information on the lubricant drain intervals and on the preventive maintenance of the machine.
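Since the conclusions rest on trend analysis of successive samples, a minimal sketch of such a trend projection (a least-squares line through Fe readings versus kilometrage, extrapolated to the 100 ppm limit of Table 2) is given below; the numeric series is illustrative and is not a reproduction of the measured data.

```python
# Hedged sketch: linear trend of Fe content vs. kilometrage and projection of the 100 ppm limit.
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

km = [1000, 2000, 3000, 4000, 5000]          # sampling points used in the research
fe = [20.0, 45.0, 60.0, 75.0, 95.0]          # illustrative Fe readings, ppm
slope, intercept = linear_fit(km, fe)
limit_km = (100.0 - intercept) / slope        # kilometrage at which the 100 ppm limit is reached
print(f"Projected Fe = 100 ppm at about {limit_km:.0f} km")
```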

The iron and copper content (Pictures 6 and 7), as products of wear, show a growing trend in the oil charge up to the end of the exploitation testing.
The content of iron is significantly above the allowable limit

Through the investigations it was established that there is a change of the physical-chemical characteristics of the oil for lubrication in the engines of the vehicles. These changes are in direct dependence on the state of all the elements of the tribomechanical engine system, i.e. on their functional characteristics.

References
[1] J. Denis: Lubricant Properties Analysis and Testing, Editions Technip, Paris, 1997.
[2] D. Grgić: On-line monitoring of oil quality and conditioning in hydraulics and lubrication systems, in: Proceedings of 10th SERBIATRIB '07, Kragujevac, Serbia, pp. 305-309.
[3] I. Mačužić, P. Todorović, A. Brković, U. Proso, M. Čapan, B. Jeremić: Development of Mobile Device for Oil Analysis, Tribology in Industry, Vol. 32, No. 3, pp. 26-32, 2010.
[4] V. Macian, B. Tormos, P. Olmeda, L. Montoro: Analytical approach to wear rate determination for internal combustion engine condition monitoring based on oil analysis, Tribology International, Vol. 36, No. 10, pp. 771-776, 2003.
[5] L. Guan, X.L. Feng, G. Xiong, J.A. Xie: Application of dielectric spectroscopy for engine lubricating oil degradation monitoring, Sensors and Actuators A: Physical, Vol. 168, No. 1, pp. 22-29, 2011.
[6] V. Macian, R. Payri, B. Tormos, L. Montoro: Applying analytical ferrography as a technique to detect failures in diesel engine fuel injection systems, Wear, Vol. 260, No. 4-5, pp. 562-566, 2006.
[7] R.I. Taylor, R.C. Coy: Improved fuel efficiency by lubricant design: a review, Proceedings of the Institution of Mechanical Engineers, Part J: Journal of Engineering Tribology, Vol. 214, No. 1, pp. 1-15, 2000.
[8] M. Priest, C.M. Taylor: Automobile engine tribology - approaching the surface, Wear, Vol. 241, No. 2, pp. 193-203, 2000.
[9] S.A. Adnani, S.J. Hashemi, A. Shooshtari, M.M. Attar: The Initial Estimate of the Useful Lifetime of the Oil in Diesel Engines Using Oil Analysis, Tribology in Industry, Vol. 35, No. 1, pp. 61-68, 2013.
[10] S. Perić: The development of a method of diagnosis of the condition from the aspect of physical-chemical and tribological characteristics of lubricating oils of vehicles, PhD thesis, Military Academy, Belgrade, 2009.
[11] S. Perić, B. Nedić: Monitoring oil for lubrication of tribomechanical engine assemblies, Journal of the Balkan Tribological Association, Vol. 20, No. 4, pp. 646-664, 2014.


OPTIMIZATION OF THE BOX SECTION OF THE SINGLE-GIRDER


BRIDGE CRANE BY GRG ALGORITHM ACCORDING TO DOMESTIC
STANDARDS AND EUROCODES
GORAN PAVLOVIĆ
Lola Institut, Belgrade, goran.pavlovic@li.rs
VLADIMIR KVRGIĆ
Lola Institut, Belgrade, vladimir.kvrgic@li.rs
STEFAN MITROVIĆ
Lola Institut, Belgrade, stefan.mitrovic@li.rs
MILE SAVKOVIĆ
Faculty of Mechanical and Civil Engineering in Kraljevo, Kraljevo, savkovic.m@mfkv.kg.ac.rs
NEBOJŠA ZDRAVKOVIĆ
Faculty of Mechanical and Civil Engineering in Kraljevo, Kraljevo, zdravkovic.n@mfkv.kg.ac.rs

Abstract: The paper considers the problem of optimization of the box section of the main girder of the single-girder
bridge crane. Reduction of the area of the box cross section is set as the objective function. The algorithm of
generalized reduced gradient (GRG2 algorithm) was used as the methodology for determination of optimum
geometrical parameters of the box section. The criteria of permissible stresses, local stability of plates, lateral stability
of the girder, static deflection, dynamic stiffness, minimum plate thickness and production feasibility (distance between
the webs) were applied as the constraint functions. Verification of the used methodology was carried out through
numerical examples and the comparison with some existing solutions of cranes was made. The comparative
optimization results show changes of the box section optimum geometric values due to domestic standards or
eurocodes.
Keywords: single-girder bridge crane, box section, optimization, GRG2 algorithm, eurocodes.

In most cases, the optimization of the girders is performed by FEA. In paper [02] design optimization is performed using principal calculations and detailed 3D FEA, primarily by changing the thicknesses of the box girder plates. Principal static and dynamic calculations for two models of a double box girder are given in the paper. As a result of the optimization, a reduction of mass of 38% is achieved, while the stress-deformation characteristics, considering the yield strength and the stability of the construction, are not endangered. In paper [03] the main aim is to reduce the structural mass of a real-world double-girder overhead crane through the use of modern computer modelling and simulation methods and applications in the CATIA software. The structural mass reductions are designed and verified by structural static stress simulations. The numerical analysis of the stress and strain field for a double beam bridge crane under maximum load conditions is done using the ABAQUS software in [04], and the distribution and the largest loaded area of the bridge are obtained. At the same time, three natural frequencies and mode shapes of the double beam bridge crane are analyzed, and the results provide a theoretical basis and reference for double beam bridge crane designers. Similarly, in paper [05] FEA of the main girder of a bridge

1. INTRODUCTION
Single-girder bridge cranes are widely applied in industrial plants. The box section of the main girder is most often used for the medium and high carrying capacities of these cranes. The mass of this girder has the largest share in the total mass of the single-girder bridge crane, and that is the reason why it is very important to reduce it in order to obtain a lighter structure, which also reduces the market price of the crane.
Most papers treat the problem of optimization or stress analysis of double-girder bridge cranes. It is known that double-girder cranes are intended for lifting and transporting large loads and for larger spans than single-girder cranes. However, the number of single-girder cranes installed in plants is significant, so the optimization of their main girders is justified. In some cases it is more economical to use the single-girder bridge crane instead of the double-girder bridge crane from the point of view of reducing the mass of the girders, [1]. This is a good reason to pay more attention to these types of cranes.


crane is conducted using the ANSYS software. The natural frequencies and the vibration mode vectors of the first 6 orders are obtained, and the vibration-hazardous areas are found. Rational optimization of the girder is able to avoid the resonance frequency region, which helps to improve the reliability and life-span of the girder.

2. MATHEMATICAL FORMULATION OF THE OPTIMIZATION PROBLEM
As the optimization task represents mass minimization, it is necessary to determine the values of the geometrical parameters of the cross-section of the girder which define its minimum area.

In addition to the application of FEM, various analytical and numerical methods are increasingly being applied in the optimization process. In paper [06] the optimum design of box-type crane girders is considered by using nonlinear programming techniques. The limitations on the stresses and the deflections induced in the girder in different load conditions are stated in the form of inequality constraints. The effects of collision, wind loading and damping time of vibration are also considered in the problem formulation. The resulting nonlinear programming problem is solved by using an interior penalty function method. Similarly, in paper [07], by using the same method, the mass of the girder was reduced by up to 17,11% for the medium rank, and 11,56% for the heavy rank. In paper [08] a parametric synthesis of the span beams of bridge cranes was made on the basis of a multivariate analysis of their geometrical parameters, combining design and test calculations; a methodology and software for the calculation of metal structures of bridge cranes with optimal weight and size characteristics were developed. In paper [09] the method of Lagrange multipliers was used as the methodology for approximate determination of the optimum dependences of the geometrical parameters of the box section. The criteria of lateral stability and local stability of plates were applied as the constraint functions. The obtained results of the optimization of the geometrical parameters were verified on numerical examples. In order to solve the optimization of the bridge crane box girder, an improved polyclonal selection algorithm based on negative selection is proposed in paper [10], building on the polyclonal selection algorithm. By taking advantage of the clone deletion and supply operations based on negative selection, the optimization ability of the proposed algorithm is improved. The experimental results of the typical function test and of the crane box girder optimization verify the effectiveness of this algorithm.

Minimization of the mass corresponds to minimization of


the volume, i.e. the area of the cross section of the girder,
where the given boundary conditions must be satisfied.
The area of the cross section primarily depends on: height
and width of the girder and thickness of plates.
The optimization problem defined in this way can be given the following general mathematical formulation:
minimize f(X)
subject to: gi(X) ≤ 0, i = 1,..., m
and li ≤ Xi ≤ ui, i = 1,..., n
where:
f(X) - the objective function,
gi(X) ≤ 0 - the constraint functions,
m - the number of constraints,
li, ui - the lower and upper boundaries, where ui > li,
X = {x1,..., xn}T - the design vector made of n design variables.
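To make the formulation concrete, the following hedged Python sketch minimizes the cross-section area of equation (2), A = 2·h·t2 + b·t1 + b2·t3, under a single illustrative constraint and simple bounds. SciPy does not ship the GRG2 algorithm used in the paper, so SLSQP is used here purely as a stand-in gradient-based solver; the starting point, bounds and the strength-type constraint are assumptions made only for illustration.

```python
# Hedged sketch of the constrained minimization: min A(X) s.t. g(X) <= 0, l <= X <= u,
# using scipy's SLSQP as a stand-in for the GRG2 algorithm referred to in the paper.
import numpy as np
from scipy.optimize import minimize

def area(x):
    h, b, t1, t2, t3, b2 = x
    return 2 * h * t2 + b * t1 + b2 * t3            # objective function, eq. (2), in cm^2

def strength_constraint(x):
    # Illustrative strength-type constraint: available section modulus must exceed a required one.
    h, b, t1, t2, t3, b2 = x
    ix = 2 * t2 * h ** 3 / 12 + (b * t1 + b2 * t3) * (h / 2) ** 2   # rough moment of inertia, cm^4
    wx_available = ix / (h / 2)
    wx_required = 2500.0                            # assumed required section modulus, cm^3
    return wx_available - wx_required               # must be >= 0 for an 'ineq' constraint

x0 = np.array([80.0, 40.0, 1.0, 0.6, 1.0, 30.0])    # h, b, t1, t2, t3, b2 in cm (assumed start)
bounds = [(40, 120), (20, 60), (0.5, 2.0), (0.5, 2.0), (0.5, 2.0), (10, 60)]
res = minimize(area, x0, method="SLSQP", bounds=bounds,
               constraints=[{"type": "ineq", "fun": strength_constraint}])
print(res.x, res.fun)
```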

3. OBJECTIVE FUNCTION
The model is presented in Picture 1.
Having in mind that there is a large number of single-girder bridge cranes in plants, this paper deals with the investigation of the optimization of the box cross-section of single-girder bridge cranes. As can be seen from the mentioned papers, there are different constraint functions, so it can be concluded that a better objective function, i.e. a smaller girder mass, is obtained for a larger number of constraints.


The mentioned papers point to the importance of the optimization of the main girder of the crane and of the creation of a model which allows a more realistic description of the crane behaviour in operation.


Having in mind the previously stated results and conclusions, the objective of this paper is to define the optimum values of the box section geometric parameters of the single-girder bridge crane which lead to mass reduction.


Picture 1. The box section of the single-girder


bridge crane


The objective function is represented by the area of the cross-section of the box girder. The vector of the given parameters is:

x = (Q, L, Mcv, Mch, ka, mk, d, ...)                  (1)

where:
Q - the carrying capacity of the crane,
L - the span of the crane,
mk - the mass of the trolley,
d - the distance between the wheels of the trolley,
Mcv and Mch - the bending moments in the vertical and horizontal planes,
ka = 0,1 - the dynamic coefficient of crane load in the horizontal plane, [11].

The area of the cross section, i.e. the objective function, is:

A = 2·h·t2 + b·t1 + b2·t3                  (2)

The geometrical properties at the specific points of the box section (Picture 1) are determined by well-known expressions (Wx1, Wy1, Wx2, Wy2, Sx2, Ix, WxB, WyB, WxA, WyA, SxA, WxC, WyC).

4. CONSTRAINT FUNCTIONS

4.1. The criterion of strength

The strength check is conducted at specific points of the girder. The actual stress σr has to be lower than the critical stress σk, which depends on the load case:

σr ≤ σk                  (3)

σk = σk1 = fy/ν1                  (4)

σk = σk2 = fy/ν2                  (5)

according to domestic standards, or:

σk = fy/γm                  (6)

according to eurocodes, where:
fy - the minimum yield stress of the plate material,
ν1 - the factored load coefficient for load case 1,
ν2 - the factored load coefficient for load case 2,
γm = 1,1 - the general resistance factor, [14].

The constraint function has the following form:

g1 = σr − σk ≤ 0                  (7)

The highest stresses occur at the middle of the span. The values of the bending moments in the corresponding planes are:

MVI = R·(L − e1)²/(4·L) + γ·q·L²/8                  (8)

MHI = Rh·(L − e1)²/(4·L) + γ·q·L²·ka/8                  (9)

where:
γ = 1,05 - the coefficient which depends on the classification class (1,05 for classification class 2), [15],
ψ = 1,15 - the dynamic coefficient of the influence of load oscillation in the vertical plane, [11],
q - the specific weight per unit of length of the girder,
e1 = F2·d/R,
while the values of the corresponding forces are:

F1 = F2 = ψ·(Q + mk)·g/2,   R = F1 + F2                  (10)

F1,st = F2,st = (Q + mk)·g/2 = Fst                  (11)

F1,h = F2,h = Fst·ka,   Rh = F1,h + F2,h                  (12)

Ft = R·(L − e1)/(2·L) + q·L/2                  (13)

P = R/nk                  (14)

where:
P - the vertical crane wheel load,
nk - the number of wheels of the trolley.

1. The strength at specific points of the girder:

Point 1:
Maximum normal stress at point 1:

σ1,u = MVI/Wx1 + MHI/Wy1 ≤ σk                  (15)

Point 2:
Maximum normal stress at point 2:

σz2 = MVI/Wx2 + MHI/Wy2                  (16)

Maximum tangential stress at point 2:

τ = Ft·Sx2/(2·t2·Ix) ≤ σk/√3                  (17)

Maximum equivalent stress at point 2:

σ2,u = √(σz2² + 3·τ²) ≤ σk                  (18)

2. The strength in the bottom flange of the girder at specific points:


Point B:


A,u = zA2 + oyA , Ed 2 zA oyA , Ed + 3 A 2 k (28)

I According to domestic standards:

Point C:

Maximum equivalent stress at point B:

B ,u = M VI / WxB + M HI / WyB k1

I According to domestic standards:

(19)

The normal stresses due to the local pressure of the trolley


wheel at point C, [15]:

II According to eurocodes:
The normal stresses due to the local pressure of the trolley
wheel at point B, [13]:

xP,C = K Cx P / t32 k1 , zP,C = K Cz P / t32 k1 (29)

oxB , Ed = cxB P / t32 k , oyB , Ed = c yB P / t32 k (20)

K Cx , K Cz - the corresponding coefficients for calculating


stresses at point C, [15].

cxB , c yB = 0

the

corresponding

coefficients

for

Maximum normal stress in the direction of the z axis at


point C:

calculating stresses at point B, [13].

zC = K Cz P / t32 + M VI / WxC + M HI / WyC

Maximum equivalent stress at point B:

B ,u = M VI / WxB + M HI / WyB + cxB P / t32 k

(21)

(30)

Maximum equivalent normal stress at point C:

Point A:

C ,u = zC 2 + xP,C 2 zC xP,C k 2

(31)

I According to domestic standards:

II According to eurocodes:

The normal stresses due to the local pressure of the trolley


wheel at point A, [15]:

P
x, A

= K Ax P / t k1 ,
2
3

P
z, A

The normal stresses due to the local pressure of the trolley


wheel at point C, [13]:

= K Az P / t k1 (22)
2
3

oxC , Ed = cxC P / t32 k , oyC , Ed = cCy P / t32 k (32)

K Ax , K Az - the corresponding coefficients for calculating


stresses at point A, [15].

cxC , cCy - the corresponding coefficients for calculating

stresses at point C, [13].

Maximum normal stress in the direction of the z axis at


point A:

zA = K Az P / t + M VI / WxA + M HI / WyA
2
3

Maximum normal stress in the direction of the z axis at


point C:

(23)

zC = cxC P / t32 + M VI / WxC + M HI / WyC

Maximum tangential stress at point A:

A = Ft S xA /(2 t2 I x ) k1 / 3

Maximum equivalent normal stress at point taki C:

(24)

C ,u = zC 2 + oyC , Ed 2 zC oyC , Ed k

Maximum equivalent stress at point A:

A,u = zA2 + xP, A2 zA xP, A + 3 A2 k1

(33)

(25)

(34)

4.2. The criterion of local stability of plates

II According to eurocodes:

1. Local stability according to domestic standards:

The normal stresses due to the local pressure of the trolley


wheel at point A, [13]:

Local stability check is done in the same manner for both


the flange and web plates. The paper considers the case
with one row of horizontal stiffeners, along with vertical
stiffeners being placed at the distance 2h (Picture 2).

oxA , Ed = cxA P / t32 k , oyA , Ed = c yA P / t32 k (26)


cxA , c yA - the corresponding coefficients for calculating

stresses at point A, [13].


Maximum normal stress in the direction of the z axis at
point A:

zA = cxA P / t32 + M VI / WxA + M HI / WyA

(27)
Picture 2. Stiffeners of the box girder

Maximum equivalent normal stress at point A:


214

OTEH2016

OPTIMIZATIONOFTHEBOXSECTIONOFTHESINGLEGIRDERBRIDGECRANEBYGRGALGORITHMACCORDING

To satisfy the stability condition it must be:

σmax ≤ σux ≤ σv   (35)

where:
σmax - maximum pressure stress,
σux - buckling stress limit, [12],
σv - yield stress, [12].

The constraint function has the form:

g2 = σmax - min(σux, σv) ≤ 0   (36)

Maximum pressure stress of the top flange:

σp = MVI/Wx1 + MHI/Wy2   (37)

Maximum pressure stresses for the webs in zone 1 and zone 2, respectively:

σ1 = MVI/Wx2 + MHI/Wy2   (38)

σ2 = MVI·(y1 - t1 - h1)/((y1 - t1)·Wx2) + MHI/Wy2   (39)

2. Local stability according to Eurocodes:

The criterion of local stability of the top flange plate is:

σp ≤ χx·fy/γm   (40)

where:
χx - a reduction factor, [14].

The criterion of local stability of the web in zone 1 and zone 2, respectively:

σ1 ≤ χx1·fy/γm   (41)

σ2 ≤ χx2·fy/γm   (42)

4.3. The criterion of lateral stability

The safety check for lateral stability of the girder is done in compliance with [12]. So, it has to be fulfilled:

σzV1 = MVI/Wx1 ≤ σk3 = χ·fy/νI   (43)

where:
χ - the buckling coefficient, [12].

Using the above-mentioned relations, the constraint function reads:

g3 = MVI/Wx1 - χ·fy/νI ≤ 0   (44)

4.4. The criterion of girder stiffness

In order to satisfy this criterion, it is necessary that the deflection in the vertical plane has a value smaller than the permissible one:

f = F1,st·L³·[1 + η·(1 - 6·β²)]/(48·E·Ix) ≤ fdop   (45)

where:
F1,st - the force acting upon the girder beneath the trolley wheel,
fdop - the permissible deflection in the vertical plane,
η = 1, β = d/L, [15].

The constraint function has the form:

g4 = f - fdop ≤ 0   (46)

4.5. The criterion of dynamic stiffness

To determine the dynamic stiffness, it is necessary to analyse the vertical oscillation of the main girder with the payload (Picture 3).

Picture 3. The model of oscillation of the main girder with concentrated mass

The mass m1 is determined according to the expression, [15]:

m1 = Q + mk + 0.486·mm   (47)

where:
mm - the mass of the girder.

The time of damping of oscillation is determined from the expression, [15]:

T = 3·τ/d ≤ Td   (48)

where:
τ = 2·π·√(δ11·m1) - the period of oscillation,
δ11 - the deflection of the girder caused by the action of the unit force,
Td - the permissible time of damping of oscillation, [15],
d - the logarithmic decrement which shows the rate of damping of oscillation, [15].

The constraint function for the criterion of dynamic stiffness is:

g5 = T - Td ≤ 0   (49)
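All of the constraint functions above reduce to simple sign checks once the corresponding stresses are computed. As a minimal illustration (not the authors' implementation), the strength check at point 2, equations (16)-(18) together with constraint (7), can be written as follows; every input is a placeholder and units only need to be consistent.

    import math

    def point2_strength_check(MVI, MHI, Wx2, Wy2, Ft, Sx2, t2, Ix, sigma_k):
        """Equivalent-stress check at point 2 of the box section (illustrative)."""
        sigma_z2 = MVI / Wx2 + MHI / Wy2                      # eq. (16), normal stress
        tau = Ft * Sx2 / (2 * t2 * Ix)                        # eq. (17), must stay below sigma_k / sqrt(3)
        sigma_2u = math.sqrt(sigma_z2 ** 2 + 3 * tau ** 2)    # eq. (18), equivalent stress
        g1 = sigma_2u - sigma_k                               # eq. (7): the design is feasible when g1 <= 0
        return sigma_2u, tau, g1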

5. NUMERICAL PRESENTATION OF OBTAINED RESULTS

The optimization is done by the generalized reduced gradient method (GRG2 algorithm), using the Solver tool from the Analysis module of the EXCEL software. The variable parameters are the section height and width and the plate thicknesses, (50). All constraint functions stated in the previous chapters are taken into the analysis.

(h, b, b1, b2, t1, t2, t3)   (50)

The case with one row of longitudinal stiffeners is analysed. The longitudinal stiffeners are adopted to be placed at the distance of h/4.

The minimum thickness of the web plates is adopted to be 5 mm and the minimum thickness of the bottom and top flange plates is adopted to be 6 mm; these limits are also treated as constraint functions. In addition, as one more constraint function, the minimum distance between the web plates is taken to be 25 cm.
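The paper performs this constrained minimization with the GRG2 algorithm of the EXCEL Solver. As a rough stand-in only, the same problem can be posed with SciPy's SLSQP solver; the sketch below keeps just the objective (2) and the geometric minima quoted above (plate thicknesses and web distance), while the mapping of t1, t2, t3 to the flange and web plates and all starting values are assumptions made for the example.

    import numpy as np
    from scipy.optimize import minimize

    def area(x):
        h, b, b1, b2, t1, t2, t3 = x              # dimensions in cm (assumed)
        return 2 * h * t2 + b * t1 + b2 * t3      # objective function, eq. (2)

    x0 = np.array([80.0, 40.0, 30.0, 35.0, 0.8, 0.6, 0.8])   # placeholder starting point [cm]
    bounds = [(20, 200), (25, 100), (10, 100), (10, 100), (0.6, 3.0), (0.5, 3.0), (0.6, 3.0)]
    constraints = [
        {"type": "ineq", "fun": lambda x: x[5] - 0.5},   # web plate thickness t2 >= 5 mm
        {"type": "ineq", "fun": lambda x: x[4] - 0.6},   # flange plate thickness t1 >= 6 mm
        {"type": "ineq", "fun": lambda x: x[6] - 0.6},   # flange plate thickness t3 >= 6 mm
        {"type": "ineq", "fun": lambda x: x[1] - 25.0},  # distance between the web plates >= 25 cm
    ]
    result = minimize(area, x0, method="SLSQP", bounds=bounds, constraints=constraints)
    print(result.x, result.fun)

With only the geometric limits active, such a solver simply settles on the lower bounds; once the strength, stability and stiffness constraints g1-g5 are added as further inequality constraints, the optimum becomes a genuine trade-off between the plate thicknesses and the section dimensions, which is what Tables 1 and 2 report.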

Table 1 and Table 2 show the results of the optimization for five solutions of single-girder bridge cranes, according to domestic standards and Eurocodes, respectively. A1 and A2 are the values of the area of the box cross-section before and after the optimization, respectively.

Table 1. Optimization results according to domestic standards
Crane | Q (t) | L (m) | A1 (cm²) | A2 (cm²) | Saving (%)
Amiga 8t | 8 | 15.1 | 208.8 | 151.8 | 27.30
Amiga 5t | 5 | 15.2 | 177.2 | 138.3 | 21.95
Radijator 10t | 10 | 16.8 | 248.8 | 161.8 | 34.97
JEEP 16t | 16 | 15 | 237.6 | 192.87 | 18.83
Lola 5t | 5 | 16.5 | 256.6 | 172.8 | 32.66

Table 2. Optimization results according to Eurocodes
Crane | Q (t) | L (m) | A1 (cm²) | A2 (cm²) | Saving (%)
Amiga 8t | 8 | 15.1 | 208.8 | 180.24 | 13.68
Amiga 5t | 5 | 15.2 | 177.2 | 159.3 | 10.10
Radijator 10t | 10 | 16.8 | 248.8 | 182.95 | 26.47
JEEP 16t | 16 | 15 | 237.6 | 231.15 | 2.71
Lola 5t | 5 | 16.5 | 256.6 | 180.24 | 13.68

The justification of the application of the GRG2 algorithm was checked on five solutions of single-girder bridge cranes which are in operation. It can be observed that greater areas of the box girder cross-section are obtained according to the Eurocodes in comparison with those calculated according to the domestic standards.

6. CONCLUSION

The paper presented the optimum dimensions of the box section of a single-girder bridge crane obtained by the GRG2 optimization method. The criteria of permissible stresses, local stability of plates, lateral stability of the girder, static deflection, dynamic stiffness, production feasibility (distance between the webs) and minimum thickness of the plates were applied as the constraint functions. The objective function was the minimum cross-sectional area, whereby the given constraint conditions were satisfied.

The conclusion is that further research should be directed toward a multicriteria analysis where it is necessary to include additional constraint functions, such as material fatigue, the influence of manufacturing technology, optimization of the ratio of plate thicknesses, types of material, conditions of crane operation and economy.

References
[1] Babin, N., Georgijević, M., Šostakov, R.: Izbor tipa mostovske dizalice, sandučaste konstrukcije u funkciji kriterijuma minimalne težine nosača, SMEITS, Beograd, 1979.
[2] Bećirović, A., Vukojević, D., Hadžikadunić, F.: Optimization of double box girder overhead crane in function of cross section parameter of main girders, 15th International Research/Expert Conference "Trends in the Development of Machinery and Associated Technology" TMT 2011, Prague, Czech Republic, 12-18 September 2011.
[3] Sankar, A., Vijayan, V., Ashraf, I.: Reducing the structural mass of a real-world double girder overhead crane, International Journal of Advances in Engineering & Technology (IJAET), Vol. 8, Issue 2, pp. 150-162, April 2015.
[4] Yu, Q., Mao, X.: The Performance Analysis of Double Beam Bridge Crane Based on Computer Simulation Technology, Key Engineering Materials, Vol. 584, pp. 107-111, Trans Tech Publications, Switzerland, 2014.
[5] Peng, R., Qin, X.: Modal analysis of crane girder based on ANSYS Workbench, Advanced Materials Research, Vol. 951, pp. 58-61, Trans Tech Publications, Switzerland, 2014.
[6] Rao, S.S.: Optimum Design of Bridge Girders for Electric Overhead Traveling Cranes, Journal of Engineering for Industry, Vol. 100, pp. 375-382, August 1978.
[7] Tian, G., Zhang, S., Sun, S.: The Optimization Design of Overhead Traveling Crane's Box Girder, Advanced Materials Research, Vols. 538-541, pp. 2850-2855, Trans Tech Publications, Switzerland, 2012.
[8] Anzev, V.Y., Tolokonnikov, A.S., Potapov, S.A., Kalabin, P.Y.: Improvement of the method of calculation of span beams of load lifting machines of bridge crane type, Vol. 7, No. 1, pp. 144-153, 2013.
[9] Savković, M., Gašić, M., Pavlović, G., Bulatović, R., Zdravković, N.: Optimization of the box section of the main girder of the bridge crane according to the criteria of lateral and local stability of plates, The 7th International Symposium KOD, pp. 113-120, Balatonfüred, Hungary, 2012.
[10] Chen, C., Shen, Y., Yuan, M.: Optimization Design of Crane Box Girder Based on the Improved Polyclonal Selection Algorithm, IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE), Volume 11, Issue 4, Ver. II, pp. 120-125, Jul-Aug 2014.
[11] SRPS M.D1.050, Standardi za dizalice, Jugoslovenski zavod za standardizaciju, Beograd, 1968.
[12] SRPS U.E7, Standardi za proračun nosećih čeličnih konstrukcija, Jugoslovenski zavod za standardizaciju, Beograd, 1986.
[13] EN 1993-6, Eurocode 3 - Design of steel structures - Part 6: Crane supporting structures, 2007.
[14] prEN 13001-3-1, Cranes - General Design - Part 3-1: Limit States and proof competence of steel structure, 2010.
[15] Ostrić, D., Tošić, S.: Dizalice, Institut za mehanizaciju Mašinskog fakulteta Univerziteta u Beogradu, Beograd, 2005.

MATHEMATICAL MODELING DYNAMIC PERFORMANCE OF


ARTILLERY FIRE SUPPORT IN THE OFFENSIVE OPERATION
DAMIR PROJOVIĆ
Military Academy, Belgrade, damirpro@yahoo.com
ZORAN KARAVIDIĆ
Military Academy, Belgrade, zkaravidic@gmail.com
MIROSLAV OSTOJIĆ
Military Academy, Belgrade

Abstract: This paper presents a mathematical model of artillery fire support as a tool for the analysis of courses of action in the decision-making process. The model is defined as a mathematical dependence between the model inputs and the indicators of the situation at a given moment of combat. Based on these indicators, the model predicts the state of the artillery unit during combat. The results of the predictions provide a more efficient evaluation of courses of action and a faster transition to the next phase of the decision-making process.
Keywords: Mathematical modeling, the dynamics of combat operations, artillery fire support, offensive operation.
1. INTRODUCTION

A model is often defined as a simplified picture of reality, and the analogy between the model and the real system is the basic assumption underlying the method of modeling. Given that real systems are extremely complex, simplification is required in the modeling process. This especially applies to organizational systems, such as military organizations, and even more so when such an organizational system conducts combat.

Besides, in combat there are a number of phenomena that are not repeated in the form in which they previously occurred, and they are very difficult to predict. From this point it is possible to build models by distinguishing dependent and independent variables and to analyze them or to experiment. Experimenting on real systems, when it comes to combat, is almost impossible.

Models of war games can have different levels of generality and can be divided into general, special and individual models. The most general are those that can be applied to a number of different situations. In contrast to the general ones, the special models are made for solving a part of the general problem. Individual models treat a problem from the framework of a particular model.

According to the criteria of completeness, models can be complex, partial and situational. A complex model contains a generic model, one or more partial models and one or more situational models. Complex models display the operation as a war phenomenon in a particular space and time from the beginning to the end, including preparations. Partial models are related to some content or security of operations; in practice they are known as an "Appendix". Situational models treat some operational problems of the general model; in current practice they are known as operational or tactical missions.

According to the generality criteria, the considered model of rocket artillery fire support is special. According to the completeness criteria it is partial. The relationships within the model, between the input and output variables, are defined and mathematically supported.

2. ROCKET ARTILLERY FIRE SUPPORT IN THE OFFENSIVE OPERATION

Artillery support has great tactical mobility and constant readiness for action on request in the shortest possible time. In addition, artillery support is completely automated in the area of preparation and firing procedures. It is able to act simultaneously, accurately and efficiently on a very large number of targets. It is permanently included in the digital environment, updating the combat situation in real time, so that after carrying out a combat order it is immediately ready to execute a new task.

Artillery is grouped and deployed to engage on the main direction of operations. The reason for this is large-scale and more efficient participation in the decisive stages of the operation. Grouping of artillery units includes assigning the artillery and missile fire support units into appropriate groups. The artillery units of the fire support form artillery and rocket groups.

Artillery fire support is the action of artillery fire at the enemy in support of own forces in combat operations. It is realized with various types of artillery fire and missile strikes to destroy and neutralize combat units and the opponent's resources.

Artillery fire support in the offensive operation is divided into two phases: preparation of the artillery attack, and artillery support of the attack. [1]

3. MODEL OF ARTILLERY FIRE SUPPORT

The mathematical model of artillery support of offensive operations is determined by two groups of conditions that more closely define the actual conditions of the current operation. They are related to the preparation and the execution phase. Considering the artillery firing according to the phases of the operation, there are two phases of artillery firing: the preparation phase of the operation corresponds to the preparation of the initial elements for firing, while the execution of the operation corresponds to the execution phase of group firing.

The first group of conditions defines the technical characteristics of the unit, i.e. the technical characteristics of the tools and resources that are used to prepare the initial elements for group firing.

The second group relates to the definition of the situational parameters in the operation, which are operationalized and quantified by determining their mutual correlation.

3.1. Model of the preparation of artillery fire support

The phase of determining the initial elements is the most complex function of artillery firing, because all the restrictions are tested on which it depends whether firing is possible at all in the chosen sector with the defined restricted areas and shelter crests, with the selected set of meteorological and ballistic data and the specific relationship between the firing position and the target.

The application of the initial elements for firing demands: the exact coordinates of the units and targets; accurate data on the meteorological and ballistic conditions of firing; oriented instruments and artillery pieces directed to the main direction according to the instructions for topographic and geodetic support of artillery; and possession of the appropriate firing tables depending on the altitude. These characteristics are defined through probable errors. Their combined effect is expressed as the summary mean value of the probable error of the preparation of initial elements for firing. It will later define the probability of hitting a certain area or target. [2]

The task of the complete preparation of the initial elements of indirect firing is to determine, as accurately as possible, the actual conditions of fire, to compare them with the table conditions, to calculate the corrections and eventually to determine the best initial elements for firing. The actual firing conditions are variable for each firing. Normal firing conditions are constant for all firings and include the conditions for which the firing tables are calculated.

The complete preparation of indirect firing initial elements consists of the following independent groups of error sources:
- errors in determining the target position (Exc, Eyc),
- errors in topographic and surveying preparation (Extgr, Eytgr),
- errors in determining ballistic corrections (Exb),
- errors in determining meteorological corrections (Exm, Eym),
- errors in technical preparation (Extp, Eytp),
- errors in the firing tables (Extg, Eytg),
- errors of rounding of the elements (Exz, Eyz),
- errors of data processing (Exop, Eyop).

Picture 1. Summary error of initial elements of complete preparation for firing

The summary error of the complete preparation of the initial elements for firing is:
According to the distance:

Ex = √(Exc² + Extgr² + Exb² + Exm² + Extp² + Extg² + Exz² + Exop²)   (1)

According to the direction:

Ey = √(Eyc² + Eytgr² + Eym² + Eytp² + Eytg² + Eyz² + Eyop²)   (2)

For further consideration it is important to answer the question: what is the size of the possible target area after the completion of the preparation of the initial elements, i.e. what is the size of the area, by distance and direction, within which the first projectile fired with the elements of complete preparation will fall with a probability of practically 100%. [3] Theoretically speaking, errors can extend from minus infinity to plus infinity. It has been proved that satisfactory accuracy is met if the errors are smaller in absolute value than 4 probable errors in both directions. This means that the probable position of the target by distance and direction, after the completion of the preparation of the initial elements, ranges from -4 PE to +4 PE, Picture 2.

The area of the possible target position after the complete preparation of the initial elements, by distance and direction, and the probability that the target is at some point of that area, are represented by the ordinates of the equation of the possible position of the target.

3.2. Model of the dynamics of artillery fire support performance

To form the mathematical model it is necessary to determine the derived parameters that quantitatively express the initial firepower of the opposing forces in combat. This is the fundamental part of this paper, dedicated to the methodology of determining these parameters for the specific problem of the dynamics of opposed artillery combat. The task consists in determining the mathematical expectation of the undestroyed number of combat units of the two groups at any point of time.

The mathematical model, based on this setting, makes the initial determination of firepower taking into consideration the following parameters:
- N1 and N2, the total numbers of weapons translated into the basic caliber,
- p1 and p2, the hit probabilities,
- α1 and α2, the rates of fire,
- σ1 and σ2, the single-shot destroyed surfaces,
- S1 and S2, the surfaces on which the two artillery groups are deployed,
- Vm1 and Vm2, the fire capabilities of both groups.

For each of these parameters the procedure for its calculation is presented below, according to the basic concept of the mathematical model.

Determination of the probability of target hitting

The corrections for the first projectile of group firing after the complete preparation of the initial elements are the summary probable errors Ex and Ey. The corrections are calculated from the center of the target. The probable errors by distance Vd and by direction Vp are random errors which obey the normal law of errors and are taken from the firing tables for the given distance. The normal law that determines the probability of finding the target by distance and direction is:

p = 1/4 · [Φ((Ex + a)/Vd) - Φ((Ex - a)/Vd)] · [Φ((Ey + b)/Vp) - Φ((Ey - b)/Vp)]   (3)

where Φ is the normal probability (Laplace) function expressed through probable errors, and a and b are the target half-dimensions by distance and direction.

Picture 2. Possible position of the target area

The probability of a hit is a complex event equal to the probability by distance multiplied by the probability by direction.

Determination of fire regimes

In forming the mathematical model, the fire regime is determined as an average rate of fire, a parameter that characterizes the artillery unit as a whole. The fire regime data are known and can be found in the technical instructions or in the firing tables.

Determining the size of the single-shot destroyed surface of the target

The single-shot destroyed surface of the target for the appropriate weapon is equal to the surface of successful action of the fragmentation projectile. The effect of a single shot fired by all weapons of the artillery unit is obtained by multiplying the size σ1, i.e. σ2, by the total number of artillery weapons. This size is important for determining the initial firepower of the artillery unit.

The starting dependence

Based on the defined baseline data, the following terms can be introduced.
The starting firepower of the first group:

U1 = σ1 · p1 · α1 · N1 / S2   (4)

The starting firepower of the second group:

U2 = σ2 · p2 · α2 · N2 / S1   (5)

In the increment of combat time Δt, the initial firepower has the values U1·Δt and U2·Δt.

Determination of the fire capability of artillery units

The operation is planned, and the plan contains a table of targets which will be engaged. Based on it, the total expenditure of projectiles is calculated:

Pb/k = Σ(i=1..n) (Up)i   (6)

The total consumption of projectiles is put into relation with the number of approved combat kits of ammunition of the operational model. [4] Based on it, the fire capability is obtained:

Vm = Ob/k / Pb/k   (7)
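As a quick numerical illustration of equations (4) and (5) (not part of the original paper), the starting firepower values can be recomputed from the situation parameters quoted later in Table 1 of Section 4; the variable names below are ours.

    sigma1 = sigma2 = 0.1        # single-shot destroyed surface [ha]
    p1, p2 = 0.16087, 0.195      # hit probabilities
    alpha1 = alpha2 = 5.0        # rates of fire [1/min]
    N1, N2 = 18, 6               # numbers of weapons
    S2, S1 = 3.0, 9.0            # surfaces of the enemy artillery units [ha]

    U1 = sigma1 * p1 * alpha1 * N1 / S2   # eq. (4)
    U2 = sigma2 * p2 * alpha2 * N2 / S1   # eq. (5)
    print(U1, U2)                         # about 0.48261 and 0.065, matching the values quoted in Section 4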

During the operation, the firepower is proportionally affected by the fire capability, i.e. by the number of approved combat kits of ammunition:

U1 = N1 · σ1 · p1 · α1 · Vm / S2   (8)

The firepower defined in this way relates the situational parameters specified in the operational model with the mathematical model of the dynamics of combat operations of artillery units.

Furthermore, at any point of time the firepower is proportional to the surfaces (S1 and S2), i.e. to the percentages of the undestroyed areas:

V1(t) = S1(t) / S1   (9)

V2(t) = S2(t) / S2   (10)

The values V1(t) and V2(t) accurately define the development dynamics of the combat between the two sides, because they allow the mean values of the undestroyed units of the first and the second group involved in the combat to be determined.

It is obvious that the values of V1(t) and V2(t) decrease in the course of time, so the increments are:
for the first group

ΔV1(t) = -U2 · V2 · V1 · Δt   (11)

for the second group

ΔV2(t) = -U1 · V1 · V2 · Δt   (12)

From the previous terms, letting Δt → 0, Diner's differential equations are obtained:

dV1/dt = -U2 · V2 · V1   (13)

dV2/dt = -U1 · V1 · V2   (14)

The initial conditions of Diner's equations are defined for t = 0, where V1(0) = 1 and V2(0) = 1. Substituting these values and solving the system of differential equations, the following terms are obtained:

V1 = e^(U1·t) · (U1 - U2) / (U1·e^(U1·t) - U2·e^(U2·t))   (15)

V2 = e^(U2·t) · (U1 - U2) / (U1·e^(U1·t) - U2·e^(U2·t))   (16)

which simultaneously constitute an analytical model of artillery and counter-artillery combat. [5]

Based on the calculated values of V1 and V2, it is possible to determine the mathematical expectation of the undestroyed number of artillery pieces using the following terms:

MO1(t) = V1(t) · N1   (17)

MO2(t) = V2(t) · N2   (18)

where:
MO1 - the mathematical expectation of the number of undestroyed weapons in the first group at the time t,
MO2 - the mathematical expectation of the number of undestroyed weapons in the second group at the time t.

4. IMPLEMENTATION OF THE MODEL IN THE OFFENSIVE OPERATION

To illustrate how the model yields solutions, an example of an offensive operation is taken [4]. In the model of the offensive operation, a division of LRSV 128 mm M77 forms the first artillery group (AG1), while the enemy self-propelled 122 mm howitzer battery 2S1 forms the second artillery group (AG2).

For the offensive operation, Table 1 provides the basic information defining the problem. The main characteristic of the problem is that the forces of both sides are deployed in defined areas, and the losses in the battle are proportional to the percentage of area neutralization.

Table 1. Situation parameters
No. | Input | Group 1 | Group 2
1. | Target range | 12106 m | 11550 m
2. | The probability of hitting | 0.16087 | 0.195
3. | Rate of fire | 5 min⁻¹ | 5 min⁻¹
4. | Number of weapons | 18 | 6
5. | Single-shot destroyed surface | 0.1 ha | 0.1 ha
6. | The surface of the enemy deployed artillery unit | 3 ha | 9 ha

The starting firepower of the units is calculated on the basis of the initial data from Table 1. Based on the above analytical expressions, the obtained numerical values are U1 = 0.48261 and U2 = 0.065107. From the data in Table 1 and the values U1 and U2, the values shown in Table 2 follow.

Based on the obtained model, the mathematically expected value of the percentage of the undestroyed area can be calculated for the whole duration of the combat.

Table 2. Results of counting
Time of artillery combat [min] | V(t)a | V(t)b | MOa(t) | MOb(t)
1 | 0.95 | 0.63 | 17.09 | 3.75
2 | 0.92 | 0.40 | 16.54 | 2.39
3 | 0.90 | 0.26 | 16.20 | 1.54
4 | 0.89 | 0.17 | 15.98 | 1.00
5 | 0.88 | 0.11 | 15.84 | 0.65
6 | 0.87 | 0.07 | 15.75 | 0.43
7 | 0.87 | 0.05 | 15.69 | 0.28
8 | 0.87 | 0.03 | 15.65 | 0.18
9 | 0.87 | 0.02 | 15.62 | 0.12
From the data in Table 2 the following can be concluded. The percentage of the undestroyed surface changes as a function of time: for example, after 2 minutes of combat this percentage is 92% for the first artillery group, while for the second artillery group it is 40%. The mathematical expectation of the undestroyed number of weapons differs analogously to the above conclusion.

Using Diner's equations in the preparation of combat operations, the headquarters of artillery groups can quantitatively determine the balance of power and predict the results of the artillery combat. By adjusting certain parameters they can assure the success of their own unit.

5. CONCLUSION

Based on the results obtained by applying the analytical model in the offensive operation, it can be concluded that such an approach to analytical modeling provides great opportunities in the simulation and research of real problems of the use of artillery units in combat.

The first part of the model has general importance. In the second part, these parameters are applied in specific conditions that make it possible to describe the combat process with Diner's system of differential equations. Based on these equations, the analytical model of combat operations for specific conditions is derived.

Based on the analysis of the mathematical expectation of the undestroyed number of weapons, it can be concluded that the success in mutual artillery combat is influenced by:
- increasing the probability of hitting;
- increasing the rate of fire (reaction time);
- increasing the surface on which own artillery units are deployed;
- increasing the total number of weapons involved in combat.

References
[1] Borbeno pravilo artiljerije, GVS KoV, Beograd, 2013.
[2] Živanov: Teorija gađanja, udžbenik za vojne škole (smer artiljerije) i artiljerijske jedinice, SSNO, GŠ JNA, UA-216, VIZ, Beograd, 1979, str. 262.
[3] Kokelj, T.: Uticaj tačnosti pripreme početnih elemenata posrednog gađanja na preciznost artiljerijske vatre, magistarski rad, Vojna akademija, 2004.
[4] Projović, D.: Optimizacija artiljerijske vatrene podrške u operacijama Vojske Srbije primenom globalnog pozicionog sistema, PhD, Vojna akademija, Beograd, 2016.
[5] Petrić, J., Kojić, Z.: Operaciona istraživanja, Naučna knjiga, Beograd, 1990.

MODELING AND MULTIBODY SIMULATION OF LAND ROVER DEFENDER 110


RIDE AND HANDLING DYNAMICS
NABIL KHETTOU
Military Academy, University of Defense in Belgrade, Serbia, nabil.khettou@gmail.com
DRAGAN TRIFKOVIĆ
Military Academy, University of Defense in Belgrade, Serbia, dragan.trifkovic@va.mod.gov.rs
SLAVKO MUŽDEKA
Military Academy, University of Defense in Belgrade, Serbia, slavko.muzdeka@gmail.com

Abstract: This paper deals with the implementation of a multibody simulation model of the off-road ground vehicle Land
Rover Defender 110 in order to simulate its handling and ride behavior. Laboratory measurements on the vehicle were
carried out in order to determine component and subsystem parameters. The required measurements are then used to build
the multibody model using the software MSC.ADAMS/Car. The ride and handling dynamics of the obtained 94-degrees of
freedom model are assessed for both the vehicle negotiating a discrete speed bump and performing a double lane change
manoeuver. The obtained results suggest that model responses are in accordance with the nature of the simulation tests.
However, filed measurement should be carried out to correlate the obtained simulation results.
Keywords: computer modelling, model validation, field testing, vehicle dynamic, vehicle conversion.
1. INTRODUCTION
Simulation and computer modeling have transformed the unidirectional process of vehicle design and development into a next-level strategy which allows engineers to reproduce manoeuvers and tests on virtual models to assess the dynamic behavior of vehicles at different design levels (complete redesign, derivative design, variant design, model update, etc.) [1]. Vehicle simulations are intended to reduce the cost and the duration of the vehicle development process and to help identify errors and deficiencies at early stages of the design process. Ride comfort and handling properties are among the major key features to be investigated using vehicle simulation models. Ride comfort is expressed as the level of discomfort experienced by the passenger in terms of frequency and amplitude of the mostly vertical oscillations induced by road geometry and engine vibration. Meanwhile, handling properties are related to the response and the stability of the vehicle to driver and environmental inputs such as gusts, wind and road disturbances [2,3]. In the field of vehicle dynamics, these two features can be evaluated by performing predefined transient and steady-state manoeuvers according to internationally accepted procedures (ISO 3888-1, STANAG 4357). Generally, ride comfort can be assessed by performing straight-line driving maneuvers over standardized road obstacles and geometries (potholes, planks, ramps, sinus waves, stochastic unevenness, etc.) [4]. On the other hand, handling qualities can be evaluated during standardized open-loop steering maneuvers and cornering events (impulse steer, fish hook, double lane change, etc.) [5]. Literature suggests that the requirements for vehicle simulation models should be in accordance with the considered dynamic characteristics. Thus, the simulation model should be kept as simple as possible, but good enough to accurately represent the dynamic behavior to be investigated [6]. According to the author, ground vehicle dynamics can be classified into three major components: vertical dynamics (vehicle response to road disturbances), lateral dynamics (vehicle response to driver steering) and longitudinal dynamics (vehicle response to engine throttle and braking). Depending on the application, many scientific papers investigate one aspect of vehicle dynamics. In [7], the author has suggested a method to identify lateral tire forces using a simple vehicle model, which can be applied to the analysis of vehicle handling performance. Pazooki [4] has developed a comprehensive off-road vehicle model for ride analysis using a 3D tire-terrain interaction model. The author has investigated both suspended and unsuspended vehicle model responses arising from a road roughness profile. A high-resolution computer-based simulation model was developed by Leatherwood [8], which aims to emulate the ride and handling performance of a military ground vehicle. Even though the author has tried to establish high-resolution models of all major subsystems, the assembled full vehicle model seems to describe the ride dynamics of the actual physical vehicle more accurately than the handling behavior.

1.1. Background and motivation


The need for developing a vehicle simulation model comes to
the fore especially when considering the possibility of
changing the use of an existing vehicle concept or installing
multi-purpose equipment. For small production series,
generally there are no financial means to develop a new
vehicle concept, yet developers seek the best available

platform whose dynamic behavior should be examined before and after the reconstruction or equipment installation. An example of this can be found in military vehicles which, depending on the application, need to be improved, armored, weaponized or equipped with tactical surveillance and reconnaissance equipment.

Modern threats and challenges require a prompt reaction to face the changing character of modern warfare and the use of new technologies. Thus, a respectable army should quickly and efficiently equip its military units with appropriate equipment. Here comes the advantage of developing military mobile solutions by converting existing vehicles. Furthermore, an army can achieve significant savings in terms of maintenance when using one platform for different purposes. Over the last 20 years an increasing need has been noticed to outfit land forces with lightweight ground vehicles for surveillance and reconnaissance of the battlefield, borders and military facilities. While leading military industries develop new concepts of these vehicles, others upgrade an existing vehicle by integrating the required equipment. The most common equipment for the above-mentioned vehicles consists of a combination of different sensors, transducers and telecommunication devices such as TV and thermal cameras, ground surveillance radars, laser devices, acoustic sensors and radio stations, all integrated in one block mounted on a telescopic mast. The analysis of the possibility of integrating this equipment on a wheeled vehicle should answer questions about arrangement, positioning, additional mass, mechanical stress of the vehicle structure, dynamic stability, ride and maneuverability.

1.2. Research goal and approach

The main objective of this paper is to contribute to the preliminary analysis of the possibility to change the purpose of an existing military wheeled vehicle by integrating special equipment for border surveillance and reconnaissance. While mounting this equipment on a commercial vehicle is less expensive and time-saving, the degradations of the ride and handling performance are considerable and have to be investigated. This is mainly because the chassis and suspension components as well as the tires have not been modified to match up with the added weight and the modification of the vehicle's center of gravity (CoG). The aim of this work is to create and validate a multibody model of the Land Rover Defender 110 that is intended to predict the ride and handling limits of the upgraded version. First, a multibody simulation model of the vehicle is built using available documentation and measurement of component parameters. The detailed simulation procedure, including driver inputs and the maneuvers performed on the model, is presented.

In order to achieve this goal, a step-by-step chronological methodology was performed as follows:
1. Identify the relevant subsystems, parameters and measures required for the model implementation in ADAMS/Car and the behavior to be assessed (subsystem topology, total weight, weight distribution, moments of inertia, spring and damper characteristics, anti-roll bar characteristic, steering ratio);
2. Create templates for all considered subsystems and assemble the full vehicle model;
3. Design appropriate manoeuvers to assess the full vehicle response;
4. Perform the maneuvers on the model;
5. Discuss the results and draw conclusions for further research.

All tests and measurements were carried out at the premises of the sponsor institution with the use of the available resources, according to a predefined research plan. The dimensions of characteristic components of the suspension system were manually measured and then drawn using CAD software in order to determine their centers of gravity as well as their moments of inertia. The coordinates of the vehicle CoG and the moments of inertia of the suspended mass have been determined by Uys et al. [9] and are used in this work. The suspension system topology was obtained by determining the coordinates of the suspension hard points. Some measurements were carried out in the laboratory to determine the spring stiffness, shock absorber characteristic, tire radial stiffness, steering ratio and anti-roll bar torsional stiffness. Based on the above, each major subsystem (front suspension, rear suspension, front wheels, rear wheels, steering subsystem and chassis) was built in the software package MSC.ADAMS/Car using the corresponding template, which contains information about the topology of the subsystem and the way it communicates with other subsystems. The assembled full vehicle model was then adjusted so that the total weight, weight distribution and the inertia of the chassis match the actual physical model.

The behavior of the model was analyzed on the basis of the model responses when performing a straight-line bump course at different speeds, to evaluate the vertical dynamics, and a double lane change course, to evaluate vehicle stability and handling performance.

2. FULL VEHICLE SIMULATION MODEL

The aim of this work is to build and validate a multibody model of the wheeled road vehicle Land Rover Defender 110 through a series of carefully selected ride and handling tests. The main reason for this choice is the availability of the vehicle and the fact that the sponsoring institution has chosen this platform to integrate the surveillance and reconnaissance equipment. The principal technical characteristics of the vehicle which are relevant to the model, as well as the moments of inertia about the CoG, are presented in Table 1. Picture 1 illustrates the position of the vehicle's CoG [9].

Picture 1. Defender's CoG position

Table 1. Technical characteristics of the vehicle
Model | DEFENDER 110
Wheel base | 2794 mm
Track width | 1486 mm
Unloaded weight | 2125 kg
Front suspension | Live beam axle
Rear suspension | Live beam axle
Axle ground clearance | 250 mm
Tire size | 235/85R16
Roll moment of inertia | 744 kg·m²
Pitch moment of inertia | 2440 kg·m²
Yaw moment of inertia | 2478 kg·m²

2.1. Component parameters identification

In order to quantify all the required parameters for the multibody simulation model, it was necessary to carry out a number of measurements on the target vehicle. The suspension topology was determined by estimating the coordinates of the most representative bushing centers of the front and rear suspensions with reference to a coordinate system located at the mid contact point between the wheel and the ground. The dimensions of the linkages were measured and drawn into ADAMS/Car to estimate their masses as well as their inertia properties. Individual spring stiffness characteristics were measured using a displacement sensor attached to an axial force transducer to obtain the corresponding force-displacement curves. The compression speed was kept small enough to consider quasi-static loading, and the hysteresis effect was neglected. Front and rear anti-roll bars were tested in the same way to obtain the corresponding force-angle curves. Shock absorber force-velocity curves were determined using a hydraulic press. The tire radial stiffness was measured with the wheel fixed at the center while the load was applied on one side of the tire. The steering transmission ratio was calculated using a displacement sensor to measure the wheel deviation along with an angle transducer to record the steering wheel angle. All the aforementioned characteristics were then processed and stored in an appropriate form to be later used in MSC.ADAMS/Car.

2.2. Suspension system configuration

The Defender's front suspension system, shown in Picture 2-a [10], consists of a live rigid axle connected to the vehicle body by means of two radius arms (1), which provide longitudinal guidance of the axle and react to longitudinal forces. A Panhard rod (2) attaches the axle to the vehicle body to prevent lateral axle displacement. This configuration allows the axle to move only up and down along the z-axis, as well as to rotate about the x-axis. A pair of coaxial coil spring (3) and shock absorber (4) is mounted vertically on each end of the axle. The rear suspension, shown in Picture 2-b, has the same configuration; however, the Panhard rod is replaced with a triangular linkage (5). This combination forms Robert's mechanism, which provides lateral axle positioning and makes the connection point between the axle and the linkage (coupler point) move vertically on a straight line. Springs (6) are mounted vertically at each end of the axle, while the shock absorbers (7) are tilted slightly at an angle about the y-axis to introduce some longitudinal damping. Both front and rear suspension systems are fitted with anti-roll bars to help reduce body roll when the vehicle corners.
Picture 2. Land Rover Defender's suspension systems: (a) front suspension, (b) rear suspension; 1 - radius arm, 2 - Panhard rod, 3 - front coil spring, 4 - front shock absorber, 5 - triangular linkage, 6 - rear coil spring, 7 - rear shock absorber, 8 - radius arm.

2.3. Model implementation

In order to adapt the complexity of the model to the behavior being assessed, a number of assumptions and simplifications were adopted and used within the model; those are:
1. The vehicle chassis is modelled as a rigid body;
2. Default values are used for the aerodynamic properties of the vehicle body;
3. Default properties are used for all bushings within the model;
4. Suspension components are considered to be rigid bodies;
5. Because of the lack of technical documentation, the steering subsystem was modelled using a predefined MSC.ADAMS template.

The Land Rover Defender is modelled as a multibody


dynamic system using MSC.ADAMS/Car, comprising beam
axle front and rear suspension, steering subsystem, front and
rear wheels and vehicle chassis. The primary purpose of this
model is to simulate the ride and handling behavior of the
vehicle. Therefore, having accurate representation of the
brake system and the powertrain is not meaningful, apart
from considering the effect of their mass properties on the
vehicle behavior. A non-linear MSC.ADAMS Pacejka 2002
tire model [11] was used and fitted with experimental data
to consider vertical dynamic of the tire under normal loads.
The front suspension is modelled as a rigid axle attached to
the vehicle body by means of two longitudinal rods
connected to the chassis by hook joints at the rear end and to
the axle by spherical joint at the front end. The Panhard rod
is modelled as a lateral rod attached to the chassis by hook
joint and to the axle by spherical joint. The steering system
is modelled as a rack pinion system using a predefined
MSC.ADAMS/CAR template and fitted with measured
steering ratio. Each end of the rack is connected to the tie
rod by constant velocity joint (convel joint). The tie rod is
connected to the wheel hub through the steering arm by a
spherical joint. The anti-roll bar is modelled as two separate
bars attached to the axle by spherical joints at the lower ends
and supported by bushings fixed to the vehicle chassis. The
other ends are connected to each other by a torsional spring
which applies moment around the axis joining the two bars.
This moment is function of the angular displacement of the
two bars around the aforementioned axis. The rear
suspension is modelled in a similar way. However, the
Panhard rod was replaced by a flexible square section beam
to be representative of the triangular linkage on the real
vehicle. The beam is attached at the rear end to the middle
of the axle by a spherical joint while the front end is fixed to
the chassis. Coil springs and dampers are modelled with

OTEH2016

nonlinear curves using measured experimental data. The 3D


models of front and rear suspension and the steering system
as well as their kinematic schemes are shown in pictures 3
and 4. In addition, symbols in Picture 4 are painted in the
same colors as their corresponding elements in Picture 3.
The full model has 94 degrees of freedom, 42 moving parts,
18 spherical joints, 12 revolute joints, 9 Hooke joints, 3
translational joints, 2 convel joints, 1 cylindrical joint, 8
fixed joints and one motion defined by the rotation of the
steering wheel.

Picture 3. MSC.ADAMS/Car full vehicle model: (a) front suspension and steering, (b) rear suspension.

2.4. Driver model


The vehicle is controlled using ADAMS/Car Driving
Machine. Vehicle longitudinal velocity is controlled so that
the actual velocity matches the desired input. The required
acceleration or deceleration to keep the vehicle velocity at
the desired value is expressed as wheel torque or brake that
must be applied to the wheels. Errors in longitudinal
velocity are compensated using controller that calculates a
feedback torque depending on the difference between actual
and the desired velocity.

Picture 4. Kinematic schemes of the suspension systems. -a- front suspension, -b- rear suspension.


The vehicle lateral control is achieved considering a


simplified bicycle model of the actual vehicle model in
order to identify the relationship between the path curvature
and the necessary steer angle input. Whenever the
simulation model deviates from the target path because of
the bicycle model simplification, a connecting contour is
built to bring back the vehicle to the target path. The vehicle
is steered with the parameters from the bicycle model and
the yaw rate of the simulation model is corrected using a
feedback controller. The latter calculates the error between
the yaw rates of the simulation model and the bicycle
model, so that the simulation model follows the contour as
closely as possible.
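For readers without access to ADAMS/Car, the two feedback loops described above can be pictured with a very small stand-in; the gains, names and numbers below are purely illustrative assumptions, not parameters of the Driving Machine.

    def longitudinal_feedback_torque(v_actual, v_desired, k_v=250.0):
        """Wheel torque correction [N*m] proportional to the longitudinal speed error [m/s]."""
        return k_v * (v_desired - v_actual)

    def steering_correction(yaw_rate_actual, yaw_rate_reference, k_r=0.05):
        """Extra steering-wheel angle [rad] proportional to the yaw-rate error against the bicycle model [rad/s]."""
        return k_r * (yaw_rate_reference - yaw_rate_actual)

    # Example: the vehicle runs slightly below the target speed and under-yaws relative to the reference.
    print(longitudinal_feedback_torque(19.0, 19.44))   # -> 110.0 N*m of additional drive torque
    print(steering_correction(0.28, 0.30))             # -> 0.001 rad of additional steering input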


2.5. Bump test


This test is used to evaluate vertical and pitch dynamic of
the vehicle negotiating a discrete obstacle (Picture 5) for
speeds ranging from 15 to 50 km/h. The test consists of
driving the vehicle straight ahead on a flat road at the
desired longitudinal speed in order to reach a steady state
before negotiating the obstacle. The vehicle is carefully
tuned so that the transient event begins at 0-degree pitch
angle. The vehicle is set so that the front wheels
simultaneously make first contact with the obstacle. No
brake is applied during the test. The bump test finishes
when the vehicle regains its initial steady state.


Picture 5. Discrete obstacle profile.

2.6. Double lane change test


The double lane change manoeuver is used to evaluate
handling and stability of the vehicle in lateral dynamics. The
manoeuver consists of switching the vehicle from one lane
to another and back to the initial lane without hitting any of
the cones defining the track (Picture 6). Depending on
vehicle width, the different dimensions of the double lane
change track used in this paper are defined by STANAG 4357 [12]. A steady state should be reached before the vehicle
crosses the first section of the test. The vehicle is driven
along a straight line at the desired speed. Pitch and roll
angles are carefully controlled so that the vehicle begins the
test at 0-degree pitch and roll angles.

Picture 6. Double lane change test track.

After entering the first section of the test track, the throttle is maintained and the test is accomplished at constant velocity. The test finishes when the vehicle regains its initial steady state.

3. SIMULATION RESULTS

The MSC.ADAMS model was tested by analyzing the relevant parameters of the real vehicle negotiating a bump obstacle at 20, 30 and 40 km/h and performing a double lane change manoeuver at 50, 60 and 70 km/h. Picture 7 represents the obtained results for the vehicle model crossing a bump obstacle at 30 km/h. From the diagrams, it is evident that the vehicle model reproduces the expected behavior in terms of curve shape, especially the suspension deformations and the vertical chassis acceleration. Picture 8 indicates the simulation results obtained for the vehicle performing a double lane change maneuver at 70 km/h. As can be seen from the diagrams, the simulation model truly predicts the behavior of the vehicle with regard to the nature of the considered test, especially the simulated roll velocity and lateral acceleration, which are significant for the handling assessment.

Picture 7. Simulation results for a bump test at 30 km/h.

Picture 8. Simulation results for a double lane change manoeuver at 70 km/h.

5. CONCLUSION

This paper deals with the development of a multibody vehicle model that can be used to predict the dynamic performance of a real vehicle. A multibody simulation model of the Land Rover Defender was developed in MSC.ADAMS/Car using vehicle parameter measurements. The 94-degree-of-freedom simulation model was tested by performing the bump test and the double lane change maneuver at various speeds. The obtained results truly reproduce the expected behavior. The developed model will later be validated against experimental data using an instrumented vehicle.

References
[1] Weber, J., Automotive Development Processes, Springer, Berlin, 2009.
[2] Wong, J. Y., Theory of Ground Vehicles, 4th ed., John Wiley & Sons, New Jersey, 2008.
[3] Gillespie, T. D., Fundamentals of Vehicle Dynamics, Society of Automotive Engineers, Inc., Warrendale, PA, 1992.
[4] Pazooki, A., Rakheja, S., Cao, D., Modeling and validation of off-road vehicle ride dynamics, Mech. Syst. Signal Process., vol. 28, pp. 679-695, 2012.
[5] Allen, W. R., Chrstos, J., Howe, G., Validation of a non-linear vehicle dynamics simulation for limit handling, Proc. Inst. Mech. Eng., Part D: J. Automobile Eng., vol. 216, pp. 319-327, 2002.
[6] Allen, R., Rosenthal, T., Requirements for vehicle dynamics simulation models, SAE Tech. Paper No. 940175, 1994.
[7] Kim, J., Effect of vehicle model on the estimation of lateral vehicle dynamics, Int. J. Automot. Technol., vol. 11, no. 3, pp. 331-337, 2010.
[8] Letherwood, M., Gunter, D., Ground vehicle modeling and simulation of military vehicles using high performance computing, Parallel Comput., vol. 27, no. 1-2, pp. 109-140, 2001.
[9] Uys, P. E., Els, P. S., Thoresson, M. J., Voigt, K. G., Combrinck, W. C., Experimental determination of moments of inertia for an off-road vehicle in a regular engineering laboratory, Int. J. Mech. Eng. Educ., vol. 34, no. 4, pp. 291-314, 2006.
[10] Workshop manual supplement and body repair manual, Publication No. LRL 0410ENG, 2nd Edition, Land Rover, 2001.
[11] Pacejka, H. B., Tire and Vehicle Dynamics, 2nd ed., Butterworth-Heinemann, 2002.
[12] NATO, STANAG 4357 - Allied Vehicle Testing Publications (AVTPs), 1991.

PERSPECTIVES OF USE OF SWITCHED RELUCTANCE MOTORS IN


COMBAT VEHICLES
RADOSLAV RUSINOV
Technical University, Sofia, rusinovr@abv.bg
ILIYAN HUTOV
Joint Forces Command, Sofia, iliyan.hutov@gmail.com
RADI GANEV
University of Structural Engineering & Architecture, Sofia, radiganev@abv.bg
KOSTADIN LAZAROV
Technical University, Sofia, klazarov@tu-plovdiv.bg

Abstract: The article makes a proposal for the use of switched reluctance motors for military transport and combat vehicles.
They are compared in terms of reliability, energy source, energy consumption, size and weight. The advantages of switched
reluctance motors and the prospects for their use are exhibited. A principle solution for drive operation is proposed.
Keywords: switched reluctance motors, electric vehicle, motor control, combat vehicle.

1. INTRODUCTION
There are several potential benefits of electric drives that are
moving the technology advancement to civil and military
applications. While some of the payoffs are common to civil
and military markets, there are some unique to both
applications as shown in Table 1.
Table 1. Electric Vehicle Benefits
Military
Civil
Vehicle Packaging
Flexibility
Fuel Economy
Fuel Economy
Onboard Power Generation
Reduced Emissions
Stealth Potential (Silent
Improved Drivability
Movement)
Improved Accelerations
Improved Accelerations
Reduced Maintenance Cost
Reduced Maintenance
Increased Silent Watch
Period

Stealth Potential (Silent Movement): The significant


onboard energy storage system can be used to meet silent
watch requirements for extended periods of time for various
missions. Depending on the power requirements of the
silent watch, a mission can be extended over a few hours,
far exceeding the silent watch capability of the current
fleets. Silent mobility over a limited distance is also
achievable where the vehicle can move in or out of a hostile
territory with a reduced chance of being detected.

For military applications the most tangible benefits are:


1) Fuel economy,
2) Available power onboard,
3) Silent watch and silent mobility,
4) Flexibility of component packaging and integration,
5) Enhanced diagnostic and prognostics.

Flexibility: An electric drives system consists of modular


components connected by cables thus giving the vehicle
designers more designing freedom. This avoids the
constraints of conventional mechanical drive systems, which
require the engine to be connected to the wheels via gear
boxes and rigid shafts. This means the components can be
arranged and integrated in the vehicle for the optimum
utilization of the available space.

Available power onboard: The electrical drive system consists of two sources of power: the engine-generator and the energy storage system. The main power management and distribution system can be designed and sized to meet the demand of all electric power consumers in the vehicle. This is extremely beneficial due to the increasing demand for electrical power for future military systems onboard a vehicle. The power management and distribution system can supply continuous power to meet such loads as propulsion, thermal management and other small power consumers, and can also be used to supply the intermittent power to drive/charge a pulsed power system for electric weapons (ETC gun and DEW) or EM armour. Furthermore, the availability of these high levels of electrical power onboard may be used to reduce the logistical burden of the fleet by eliminating, in certain instances, the towed generators necessary to provide electric power in the field.

Stealth Potential (Silent Movement): The significant onboard energy storage system can be used to meet silent watch requirements for extended periods of time for various missions. Depending on the power requirements of the silent watch, a mission can be extended over a few hours, far exceeding the silent watch capability of the current fleets. Silent mobility over a limited distance is also achievable, where the vehicle can move in or out of a hostile territory with a reduced chance of being detected.

An electrically powered unit with electric motors, compared to internal combustion engines, is more reliable, fault-tolerant, easier to repair and has lower operating costs. The electric motors have a higher efficiency, much better performance, and are easier to manage [1]. They also offer greater flexibility in the deployment of the energy source compared to fuel tanks. The drive with electric motors saves a number of facilities, such as transmissions, and significantly decreases the requirements on the cooling and lubrication systems.
Because of a number of features, the machine's centre of gravity is high, which leads to some limitations in maneuverability and stability. The electric motors are suitable for incorporation in the machine wheels. This results in a lower centre of gravity, greater stability and maneuverability.

2. MAIN
The switched reluctance motors (SRM) are characterized by simple construction, the lack of coils and magnets in the rotor, the lack of mechanical switches, maintainability, reliability, a wide range of speed control, high speeds, high starting torque and overload capacity. They are suitable for work in conditions of high temperatures, dust and explosive environments, and for military applications. They also have some disadvantages, such as torque pulsations.

Figure 1. Switched reluctance motor

The number of poles of the SRM may vary. Fig. 1 shows a motor with the configuration 6-4 (six poles in the stator and four poles in the rotor). The pulsation of the motor torque depends on the number of poles.

SRM can work in the absence of a phase (phase winding interruption or failure of one of the power switches).

Typical for this type of motor is the necessity of phase switching. The control unit could be electronic (Fig. 2) [2] or mechanical.

Figure 2. SRM 6/4 - an electronic switch of the stator voltages.

In case of failure of the electronic unit, phase switching could be realized by a mechanical command controller and mechanical contacts. This enables emergency travel over short distances.

An advantage of the topology of the inverter (Fig. 3) for feeding the SRM is that it is impossible to obtain a short circuit.

Figure 3. Inverter for supply voltage of the stator windings.

In military vehicles reliability is crucial. In case of electronic switching block failure, the stator voltage is supplied by a mechanical switch (command controller) driven by an operator. In emergency mode, the motor should deliver 80% of the nominal power.

Advantages concerning reliability: there are no mechanical switches, and the stator and the rotor have a simple construction, which increases the mean time before failure (MBF).

Such motors are very suitable for gearless drives. They have high torque at low speeds. As is apparent from expression (1), the motor shaft torque depends on the square of the current:

M = (dL(θ, i)/dθ) · i² / 2      (1)

From the expression of the torque, some conclusions can be made [3]:
- Torque is proportional to the square of the current, so it does not depend on the current direction. This determines the use of a unipolar power source, which allows reducing the number of electronic switches in the inverter.
- Because the torque of the SRM depends on the square of the current, the mechanical characteristic resembles that of a DC motor with series excitation, i.e. it has a large starting torque.
- The change of the rotation direction is carried out by changing the sequence of the feeding pulses to the motor phases, which is simple to implement.
- The SRM can operate in all four quadrants of the mechanical characteristic, i.e. it can realize braking modes by returning energy to the source.
- Easy implementation of rotation speed management.
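Equation (1) and the conclusions above can be illustrated with a short numerical sketch. The inductance profile, pole count and current used below are illustrative assumptions, not machine data from the paper; the sketch only shows that the phase torque follows (dL/dθ)·i²/2 and is independent of the sign of the current.

    import math

    def phase_inductance(theta, l_min=0.008, l_max=0.060, rotor_poles=4):
        # Idealized triangular inductance profile L(theta) of one phase (henry).
        # All numbers are illustrative assumptions, not values from the paper.
        pitch = 2.0 * math.pi / rotor_poles           # rotor pole pitch (rad)
        x = (theta % pitch) / pitch                   # position inside one pitch, 0..1
        frac = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
        return l_min + (l_max - l_min) * frac

    def phase_torque(theta, i, d_theta=1e-4):
        # Eq. (1): M = (dL(theta, i)/dtheta) * i^2 / 2, with dL/dtheta taken numerically.
        dl = (phase_inductance(theta + d_theta) - phase_inductance(theta - d_theta)) / (2.0 * d_theta)
        return 0.5 * dl * i * i

    # The torque does not depend on the sign of the current, so a unipolar supply suffices:
    print(phase_torque(0.2, 10.0), phase_torque(0.2, -10.0))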

Figure 4. Graphic of the SRM mechanical characteristic.

Motor power: the drive system should be approximately 1 to 1.5 MW for a 50/60-ton battle tank and 200/300 kW for a 10/20-ton combat-support vehicle [4].
The addition of power will allow:
- an increase of acceleration at all speed ranges;
- an increase of the maximum speed in all kinds of terrain and grades;
- stealth operations (limited silent mobility and silent watch).

3. CONCLUSION
Electric vehicles (EV) are an existing technology, but many development efforts are still needed to put valuable products on the commercial market. The demand for development and design effort in the field of drives, energy sources and charging infrastructure is becoming enormous and a challenging field for the European Community.

The future of access to energy, the environmental and climate problems and the need to solve the mobility problems of urban warfare are fields in which electric vehicles offer a large number of interesting and adequate solutions.

References
[1] Rusinov,E.: Application of Adaptive Digital Filter for
Rotor Position Determination of Switched Reluctance
Motor, Automatics and Informatics, Sofia, 2014
[2] http://machinedesign.com/
[3] Krishnan,R.: Switched Reluctance Motor Drives, CRC
Press, 2001
[4] All Electric Combat Vehicles (AECV) for Future Applications, RTO Technical Report TR-AVT-047, 2004.


SECTION IV

Ammunition and Energetic Materials

CHAIRMAN
Slobodan Jaramaz, PhD
Nikola Gligorijević, PhD

PHYSICO-CHEMICAL PROPERTIES AND THERMAL STABILITY OF MICROCRYSTALLINE NITROCELLULOSE ISOLATED FROM WOOD FIBER
MOHAMMED AMIN DALI
Military Academy, University of Defence, Belgrade, PhD researcher, dalima380@gmail.com

Abstract: Nitrocellulose (NC), which was discovered in 1832, has a great importance in the military field. In fact, it is considered the quasi-universal material for gunpowder, double base propellants and composite double base propellants. The purpose of this study is divided into two parts. The first part is to improve the performance and properties of microcrystalline nitrocellulose samples prepared in the laboratory and to compare them with conventional nitrocellulose during artificial ageing, using FTIR and XRD. In the second part, the aim is to improve the thermal stability of the nitrocellulose samples with two kinds of stabilizers (ethyl centralite and N-(2-acetoxyethyl)-p-nitroaniline), which was tested via DSC, using 3 % by weight of the stabilizer for each preparation. All samples are subjected to artificial ageing. The thermal degradation is then evaluated using non-isothermal kinetic models in order to investigate the variation of the thermal degradation energy (activation energy, Ea). In addition, the present work has demonstrated that DSC measurements allow the actual state of the nitrocellulose to be evaluated and prove that this analytical technique is capable of distinguishing the differences in the thermal decomposition processes of nitrocellulose during ageing, as well as of determining the storage time.
Keywords: wood, cellulose, nitrocellulose, microcrystallinity, ageing.

1. INTRODUCTION
In the 19th century, black powder was the only explosive substance used. However, the invention of nitrocellulose by Henri Braconnot in 1832 brought the use of black powder to an end. This invention started a revolution in the field of energetic materials. This polymer, synthesized by nitration of cellulose, became popular not only for its military applications as an ingredient in formulations of powders, propellants and pyrotechnic compositions, but also in the civilian sector, in paints, photography and cosmetics. The importance and frequent use of nitrocellulose in the formulation of energetic products prompted many studies of the degradation of the substance, which was attributed to nitrogen oxides released from the nitrate esters during storage at room temperature; that is why Alfred Nobel used DPA as a stabilizer in nitrocellulose.
In this study, the nitrocellulose sample was obtained in the laboratory by nitration of cellulose which is itself isolated from wood fibers (pine wood).

2. EXPERIMENTAL PART
2.1. Obtaining the native and microcrystalline cellulose from wood

Obtaining cellulose fibers and microcrystalline cellulose from wood requires a set of treatments and pretreatments, as given in Picture 1 [1, 2, 3].

Picture 1. Protocol for obtaining the native and microcrystalline cellulose from wood fibers

Picture 2. Wood blocks after grinding

Picture 2. Pure native cellulose (a) from pine wood, (b) commercial cellulose

Picture 3. Pure native microcrystalline cellulose (a) from pine wood, (b) commercial microcrystalline cellulose

In Pictures (3, 4) it can be seen that there is no difference between the synthesized samples and the commercial samples.

Picture 4. IR specters of the intermediate pretreatments of the pine wood.

The FTIR spectrum interpretation of softwood (BR), pine wood after Soxhlet extraction (BR-ES), hemicellulose (HC-R), native cellulose (CR) and commercial cellulose (CR-O) (Picture 5) shows the existence of hemicellulose absorbance bands at 1732 cm-1; the two peaks appearing at 1610 cm-1 and 1512 cm-1 correspond respectively to carbonyl groups (C=O) and to stretching vibrations of (C=C), which are characteristic groups of lignins. The comparison of this spectrum with the spectra of the intermediate treatments shows the complete elimination of the hemicelluloses by the alkaline treatment. This removal is confirmed by the disappearance of the peak at 1732 cm-1. [4, 5]

Picture 5. FTIR specters of commercial and synthesized cellulose

The superposition of the spectrum of the commercial cellulose with that of the synthesized cellulose confirms that they have the same nature as the commercial cellulose. [4, 5]

2.2. Obtaining the original nitrocellulose and microcrystalline nitrocellulose
Nitration samples of the native and microcrystalline, commercial and synthesized cellulose are produced with a sulfonitric mixed acid at a weight/volume ratio of (w/v) = 1 g / 30 ml [7-9]. After a stabilization treatment lasting 24 hours, the nitrogen content of each sample is given in Table 1 [8]:

Table 1. Nitrogen content of the nitrocellulose samples: (NCR) softwood nitrocellulose, (NCMCR) softwood microcrystalline nitrocellulose, (NCCOM) commercial native nitrocellulose made in China, (NCMCCOM) commercial microcrystalline nitrocellulose.
Sample      Nitrogen content (%)
NCR         12,24
NCMCR       12,84
NCCOM       13,42
NCMCCOM     12,98
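For orientation, the nitrogen content can be translated into an average degree of substitution per anhydroglucose unit using the standard mass-balance relation; this conversion is added here only as an illustration and is not part of the paper's analysis.

    def degree_of_substitution(nitrogen_percent):
        # Standard mass balance for nitrated cellulose (not a formula from the paper):
        # N% = 100 * 14 * DS / (162 + 45 * DS), solved for DS.
        n = nitrogen_percent
        return 162.0 * n / (1400.0 - 45.0 * n)

    # Nitrogen contents from Table 1:
    for name, n in [("NCR", 12.24), ("NCMCR", 12.84), ("NCCOM", 13.42), ("NCMCCOM", 12.98)]:
        print("%-8s N = %5.2f %%   DS = %.2f" % (name, n, degree_of_substitution(n)))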

Picture 8. X-ray diffractograms of the nitrocellulose samples

The previous diffractograms show that the synthesized nitrocellulose from both types of cellulose (native and microcrystalline) has a structure identical to that of the commercial nitrocellulose. [10]

2.3. Kinetics study of the degradation of the synthesized and stabilized nitrocellulose
In order to illustrate the role of the stabilizers, (NC-Stab) thin films underwent accelerated aging at a temperature of 80 °C for a period of 15 days.
The preparation of the (NC-Stab) thin films is performed using the following proportions of the four nitrocellulose varieties (NCCOM, NCMCCOM, NCR, NCMCR), with 3% of stabilizer (ethyl centralite, EC, and N-(2-acetoxyethyl)-p-nitroaniline, ANA). Each mixture is solubilized in acetone. [12]

Picture 9. Chemical structures of the used stabilizers

Picture 10. Thin films of the mixtures NC+stab: (a) NC+ANA, (b) NC+EC.

All samples are cut and placed in tubes in order to carry out accelerated aging in a thermostatic bath, controlled for 15 days at 80 °C. [12]

Picture 11. Oven used for the aging

Samples are taken three times (0 days, 8 days and 15 days) to be analyzed by FTIR, XRD and DSC.

3. RESULTS AND DISCUSSION
3.1. Aging effect on chemical functions
The evolution of the chemical functions during accelerated aging is followed by FTIR.

Picture 12. NCCOM infrared specters during aging.

Picture 13. NCR infrared specters during aging

Picture 14. NCMCCOM infrared specters during aging

Picture 15. NCMCR infrared specters during aging.

According to the infrared specters during aging, we can see an attenuation of the intensities of the (O-NO2) function for all nitrocellulose samples and the appearance of (C=O) functions at 1714 cm-1, which is expected considering the thermal degradation of nitrocellulose according to the reaction [6]:

>CH-ONO2 → >C=O + HNO2

3.2. Aging effect on the crystallinity
To understand the effect of aging on the crystallinity of the nitrocellulose samples, they were examined by XRD analysis during aging every 5 days.

Picture 16. NCCOM XRD diffractograms during aging

Picture 17. NCMCCOM XRD diffractograms during aging

Picture 18. NCR XRD diffractograms during aging

Picture 19. NCMCR XRD diffractograms during aging

The diffractograms of the nitrocellulose samples (NCR, NCMCCOM and NCMCR) indicate a disappearance of the principal peak of crystallinity. However, in the commercial nitrocellulose (NCCOM), the crystallinity is partially preserved at the end of the aging. The disappearance of the peaks of the synthesized samples has the same origin. It is faster in the case of the conventional (NCR) than in the microcrystalline (NCMCCOM and NCMCR) samples. This difference appears more clearly in the diffractograms after 10 days. The evolution of the crystallinity of the synthesized nitrocellulose samples differs from the commercial one due to the processing and stabilization conditions. [10]
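The paper follows the crystallinity qualitatively, through the disappearance of the principal diffraction peak. If a numerical measure were wanted, one common approach (not necessarily the one used by the author) is the Segal crystallinity index; a minimal sketch with invented intensities:

    def segal_crystallinity_index(i_peak, i_amorphous):
        # Segal index (%) from the main crystalline peak intensity and the
        # amorphous background minimum of a diffractogram.
        return 100.0 * (i_peak - i_amorphous) / i_peak

    # Invented intensities, for illustration only (not read from Pictures 16-19):
    print(segal_crystallinity_index(1500.0, 450.0))   # unaged-like sample
    print(segal_crystallinity_index(620.0, 430.0))    # aged-like sample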

3.3. Aging effect on the enthalpy of degradation and degradation temperature
Using the DSC method, the variations in the enthalpy of degradation and the degradation temperature are followed. The obtained results are summarized in Tables 2 and 3.

Table 2. Temperatures and degradation enthalpies of unaged NC samples (β = 5 °C/min)
Sample      Enthalpy of degradation (J/g)   Degradation temperature (°C)
NCR         1184,281                        203,90
NCMCR       1296,352                        203,64
NCCOM       1563,603                        203,19
NCMCCOM     1296,508                        203,49

Table 3. Temperatures and degradation enthalpies of NC samples after 15 days of aging (β = 5 °C/min)
Sample      Enthalpy of degradation (J/g)   Degradation temperature (°C)
NCR         236,856                         183,96
NCMCR       907,447                         180,7
NCCOM       938,162                         189,77
NCMCCOM     848,254                         182,24

From Tables 2 and 3, it can be clearly seen that both thermodynamic parameters (enthalpy of degradation and degradation temperature) decrease during aging. This is explained by the acceleration of the degradation phenomenon due to the accumulation of nitrogen oxides, which makes the nitrocellulose unstable (lower degradation temperature) and less energetic because of the disappearance of the nitro groups (NO2). The decrease in the degradation enthalpies is larger in the case of the conventional nitrocellulose than in the microcrystalline one; this means that, over a long period, the crystallinity and the reduction of the polymerization degree affect the stability and the energetic properties of nitrocellulose. [11]

3.4. Stabilizer effect on aging
Via the DSC method, the effect of the stabilizer on nitrocellulose aging is studied. The variation in the enthalpy of degradation and the degradation temperature was followed for NC samples with and without stabilizer. The obtained results are summarized in Table 4.

Table 4. Temperatures and enthalpies of degradation of NC+stab samples after 15 days of aging (β = 5 °C/min)
Sample        Enthalpy of degradation (J/g)   Degradation temperature (°C)
NCMCR+EC      1102,096                        187,09
NCMCR+ANA     1223,096                        183,07
NCCOM+EC      1012,305                        195,23
NCCOM+ANA     999,327                         195,02
NCMCCOM       848,254                         182,24

According to Table 4, the difference between the native and microcrystalline nitrocellulose, stabilized and non-stabilized, can be noticed clearly. After 15 days of accelerated aging, the stabilized nitrocellulose samples are more stable than the non-stabilized samples with respect to the degradation temperatures, and the decrease in the degradation enthalpies of the non-stabilized samples is larger than that of the stabilized samples, which confirms the importance of using a stabilizer during the storage period. It can also be seen that the NCMCCOM sample shows a smaller decrease in degradation temperature and enthalpy than the non-stabilized NCR sample after 15 days of aging (see Table 3). This indicates that the microcrystalline nitrocelluloses are more stable than the conventional nitrocelluloses [10, 11].

3.5. Determination of kinetic parameters and storage time
The determination of the activation energies and pre-exponential factors for the aged samples is accomplished by Kissinger's (1) and Ozawa's (2) methods (see Tables 5 and 6). [11]

ln(β/Tm²) = ln(A·R/Ea) − Ea/(R·Tm)      (1)

log(β) = log(A·Ea/R) − 2.315 − 0.4567·Ea/(R·Tm)      (2)

The values of the activation energies (Ea), pre-exponential factors (Log(A)), rate constants (log K) and critical temperatures of explosion (Tb) for the nitrocellulose samples, stabilized and non-stabilized, are obtained by the linear least-squares and iterative methods using MATLAB and are given in Table 5. [10-12, 14]

Table 5. Kinetic parameters of nitrocellulose samples with and without stabilizer after 15 days of aging via Kissinger's method
Sample        Ea (kJ/mol)   Log(A)   log K   Tb (K)
NCCOM         104,16        9,05     8,74    477,16
NCCOM+EC      127,97        12,13    10,30   483,04
NCCOM+ANA     135,17        12,39    10,75   481,87
NCR           89,55         7,98     7,71    481,27
NCMCCOM       95,55         8,74     8,00    477,67
NCMCR         106,04        10,07    8,56    466,52
NCMCR+ANA     135,17        13,39    10,29   470,54
NCMCR+EC      138,54        13,62    10,65   472,52

Table 6. Kinetic parameters of nitrocellulose samples with and without stabilizer after 15 days of aging via Ozawa's method
Sample        Ea (kJ/mol)   Log(A)   log K   Tb (K)
NCCOM         106,55        //       //      //
NCCOM+EC      129,24        //       //      //
NCCOM+ANA     136,35        //       //      //
NCR           92,56         //       //      //
NCMCCOM       98,23         //       //      //
NCMCR         108,13        //       //      //
NCMCR+ANA     135,88        //       //      //
NCMCR+EC      139,15        //       //      //
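As an illustration of how the Kissinger relation (1) yields values such as those in Table 5, the sketch below performs the linear least-squares fit on invented heating-rate/peak-temperature pairs; they are not the DSC data of this work.

    import math

    # Invented DSC data (NOT from this study): heating rates (K/min) and peak temperatures Tm (K).
    beta = [2.0, 5.0, 10.0, 20.0]
    Tm = [468.0, 475.0, 481.0, 487.0]
    R = 8.314  # J/(mol K)

    # Kissinger, eq. (1): ln(beta/Tm^2) = ln(A*R/Ea) - Ea/(R*Tm)
    # A linear fit of y = ln(beta/Tm^2) against x = 1/Tm gives slope = -Ea/R.
    x = [1.0 / t for t in Tm]
    y = [math.log(b / t ** 2) for b, t in zip(beta, Tm)]
    n = len(x)
    x_mean = sum(x) / n
    y_mean = sum(y) / n
    slope = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) / \
            sum((xi - x_mean) ** 2 for xi in x)
    intercept = y_mean - slope * x_mean

    Ea = -slope * R                     # activation energy, J/mol
    A = math.exp(intercept) * Ea / R    # pre-exponential factor, 1/min
    print("Ea = %.1f kJ/mol, log10(A) = %.2f" % (Ea / 1000.0, math.log10(A)))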

It can be clearly seen that both methods provide similar results, and the stabilized samples, after 15 days, remain more stable than the non-stabilized samples.


Moreover, there is an experimentally proven equation, based on the van 't Hoff law, which is used to relate accelerated aging to natural aging. It is defined by the following expression [13]:

tE = (tT / 365.25) · F^((TT − TE)/ΔTF)      (3)

where

F = e^(Ea·ΔTF / (R·TT·TE))      (4)

Here (tE) is the storage time in years, (tT) is the aging time in days, (TE) is the storage temperature (25 °C), (TT) is the aging temperature, and (F) is the change factor of the reaction rate for a temperature change of ΔTF = 10 °C. Table 7 contains the NCCOM and NCMCR storage times.
Table 7. Storage time for NCCOM and NCMCR
Sample          Storage time (years)
NCCOM           28,303
NCCOM+EC        54,196
NCCOM+ANA       46,435
NCMCR           29,384
NCMCR+EC        63,832
NCMCR+ANA       53,374

This clearly shows the role of the stabilizers in increasing the lifetime of energetic products based on nitrocellulose.
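A short numerical sketch of equations (3) and (4): it uses the NCCOM activation energy from Table 5 and the aging conditions described above, and under these assumptions the computed value comes out close to the NCCOM storage time reported in Table 7.

    import math

    R = 8.314            # J/(mol K)
    Ea = 104160.0        # NCCOM activation energy from Table 5, J/mol
    t_T = 15.0           # aging time, days (accelerated test described above)
    T_T = 80.0 + 273.15  # aging temperature, K
    T_E = 25.0 + 273.15  # storage temperature, K
    dTF = 10.0           # temperature step of the van 't Hoff factor, K

    # Eq. (4): change factor of the reaction rate for a dTF temperature step
    F = math.exp(Ea * dTF / (R * T_T * T_E))

    # Eq. (3): equivalent storage time in years at T_E
    t_E = (t_T / 365.25) * F ** ((T_T - T_E) / dTF)
    print("F = %.2f, storage time = %.1f years" % (F, t_E))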

4. CONCLUSION
Accelerated aging of nitrocellulose shows the positive contribution of using microcrystalline nitrocellulose, isolated from wood fibers, for the production of ammunition, considering its stability and the slow evolution of the substance properties during aging. This was confirmed by different analyses such as XRD, DSC and FTIR.

References
[1] Trache, D., Khimeche, K.: Physico-chemical properties and thermal stability of microcrystalline cellulose isolated from Alfa fibers, Carbohydrate Polymers, 104, 223-230, 2014.
[2] Adel, A.M.: Characterization of microcrystalline cellulose prepared from lignocellulosic materials. Part II: Physicochemical properties, Carbohydrate Polymers, 83, 676-687, 2011.
[3] Goh, C.S. et al.: Evaluation and optimization of organosolv pretreatment using combined severity factors and response surface methodology, Biomass and Bioenergy, 35, 4025-4033, 2011.
[4] Kavalenko, V.I. et al.: Interpretation of the IR spectrum and structure of cellulose nitrate, Journal of Structural Chemistry, 34, 540-547, 1993.
[5] Vainio, U.: Characterisation of cellulose- and lignin-based materials using X-ray scattering methods, Report Series in Physics, University of Helsinki, Finland, 2007.
[6] Quye, A. et al.: Investigation of inherent degradation in cellulose nitrate museum artefacts, Polymer Degradation and Stability, 96, 1369-1376, 2011.
[7] Trache, D., Khimeche, K.: Synthesis and characterization of microcrystalline nitrocellulose from esparto grass, 43rd International Annual Conference of Energetic Materials, 1-8, Germany, 2012.
[8] Nickel, R.R., Walkers, R.R.: Method for manufacture of microcrystalline nitrocellulose, European patent application, 1886983, 2006.
[9] Hermann, M. et al.: Microstructure of nitrocellulose investigated by X-ray diffraction, International Annual Conference, Fraunhofer Institut für Chemische Technologie, 42-53, Germany, 2011.
[10] Savizi, M.R. et al.: Effect of particle size on thermal decomposition of nitrocellulose, Journal of Hazardous Materials, 168, 1134-1139, 2009.
[11] Lin, C.P. et al.: Comparison of TGA and DSC approaches to evaluate nitrocellulose thermal degradation energy and stabilizer efficiencies, Process Safety and Environmental Protection, 88, 413-419, 2010.
[12] Trache, D., Khimeche, K.: Study on the influence of ageing on thermal decomposition of double base propellants and prediction of their in-use time, Fire and Materials, 37, 328-336, 2013.
[13] Trache, D., Khimeche, K.: Thermal behaviour of nitrocellulose during ageing and stabilizers efficiency, 44th International Annual Conference of Energetic Materials, Germany, 1-12, 2013.
[14] Pourmortazavi, S.M. et al.: Effect of nitrate content on thermal decomposition of nitrocellulose, Journal of Hazardous Materials, 162, 1141-1144, 2009.

A METHOD OF GUNPOWDER GRAIN SHAPE OPTIMIZATION

STEFAN JOVANOVIĆ
Military Academy, University of Defence, Belgrade, Generala Pavla Jurišića Šturma br. 33, 064/466-07-81,
stefan.jovanovic.tesla@gmail.com

Abstract: In this paper, the optimization of the gunpowder grain shape and the interior ballistic calculation using the modified gunpowder grain are given. Using the mathematical model of the classic interior ballistic calculation and the condition of constant pressure of the gunpowder gases during the burning process, the function of the burning surface of the gunpowder grain is determined. By analyzing the function of the burning surface, the dimensions and shape of the modified gunpowder grain have been determined. In order to obtain the results, the classic interior ballistic calculation is applied both for the gunpowder grain which is in current use and for the modified gunpowder grain optimized for use as a composition of the gunpowder charge of the 12.7 mm M93 long range rifle. A comparison of the results between the modified gunpowder grain shape and the existing one is also given. A gunpowder charge consisting of the modified gunpowder grain shape can significantly increase the exploitation resources of a certain weapon system without any changes to the weapon and ammunition construction. Modified gunpowder grains with optimized dimensions can be used in any weapon system based on the same firing principle, with similar results regardless of the projectile type.
Keywords: gunpowder, gunpowder grain.
1. INTRODUCTION
Velocity and maximum pressure behind the projectile are the most important features of a weapon system. A higher muzzle velocity is preferred for almost every weapon system because it increases the weapon's range and can increase the penetrating capabilities of certain projectiles, while the maximum pressure should be as low as possible because it negatively affects the barrel characteristics.

Picture 1: Sketch of the system at the beginning (a) and during (b) the burning process of the gunpowder

The features of the gunpowder charge in combination with the mass of the projectile determine the shape of the pressure-time curve during the firing process [1]. The integral of the pressure with respect to time represents the impulse of the projectile [1]. The shape and dimensions of the gunpowder grains are the most influential characteristics of the gunpowder charge in the process of firing a projectile, regardless of the projectile type. The transformation from the solid state into the gas state happens across the surfaces of the gunpowder grains that are caught by the flame [1]. All those surfaces together are called the burning surface of the gunpowder grain. The amount of gases created during the burning process of the gunpowder is directly proportional to the burning surface of the gunpowder grain. As the projectile moves down the barrel of the weapon, the volume behind the projectile increases. At the same time, the gunpowder burns behind the projectile, creating gunpowder gases. According to the equation of state for real gases, the velocity of the projectile and the geometrical features of the gunpowder grain therefore determine how the pressure changes during the burning process of the gunpowder [2].

There are currently two types of gunpowder grains, progressive and degressive ones [1]. The progressive type, during the first phase of the burning process, increases its burning surface as it burns, therefore creating a greater pressure increase. After the first phase of the burning process, it continues to burn like any other degressive gunpowder grain, decreasing its burning surface as it burns [1]. The diagram of the relative burning surface presented as a function of the relative burnt distance y can be used to describe the rules under which a certain shape of gunpowder grain burns. The relative burning surface is the ratio between the burning surface at a certain moment and the burning surface of the gunpowder grain at the beginning of the burning process. The relative burnt distance is the ratio between the distance that the flame has crossed and the smallest dimension of the gunpowder grain, 2r0 [1]. In picture 2a one example of a degressive gunpowder shape, a cylindrical shape in this case, can be seen, while picture 2b shows one progressive gunpowder grain shape, a cylindrical seven-perforation gunpowder grain.

Progressive gunpowder grains are usually more efficient than degressive ones, but they are not always the best solution as the composition of a gunpowder charge because of their ability to create a greater pressure increase compared to the degressive ones, which is not suited to some weapon systems. Also, because of their more complicated geometry compared to the degressive ones, in certain gunpowder charges, like gunpowder charges for pistol ammunition, it is very difficult to make them correctly.

However, both types create high pressure in order to achieve a high muzzle velocity. In most cases that pressure is above 2500 bar and in some cases can go over 5000 bar. In order to withstand that amount of pressure, the barrel needs to be thick enough. This parameter affects the barrel characteristics a lot, especially the barrel's life [3]. It is important to mention that this type of pressure change has an impulsive nature because of the rapid pressure increase during the firing process. The same velocity can be achieved if the pressure on the pressure-time curve is held at a constant value during the burning process [4]. Besides that, the maximum pressure of the gunpowder gases in that case would be much lower compared to the pressure created using a gunpowder charge consisting of any existing gunpowder grains.

In order to maintain the pressure at a constant level, the creation of the gunpowder gases should increase over time. If so, then according to the previous claims, the burning surface of the gunpowder grains must constantly increase during the whole burning process of the gunpowder; in other words, a totally progressive gunpowder grain should be used. The diagram of the relative burning surface of such a gunpowder grain can be seen in picture 2c. There are no such gunpowder grains in current use.

Picture 2: Diagrams of relative burning surface for different types of gunpowder grains.

In this paper, the shape of the totally progressive gunpowder grain is determined. The results of the calculation and a comparison between the results of the gunpowder grain in use and the modified one for a specific weapon system are also given.

2. MATHEMATICAL MODEL
In order to determine the shape and dimensions of the gunpowder grain that fulfils the condition of constant pressure during the burning of the gunpowder charge, it is necessary to use a certain mathematical model. Because of the simplicity of the mathematical apparatus, the classic interior ballistic model is used.

2.1. Classic interior ballistic model
The model is based on a simplified picture of the firing process. This model uses the geometrical law of burning, which allows the gunpowder charge to be treated as only one gunpowder grain instead of all of them. It is also based on several hypotheses which, combined with the geometrical law of burning, were used to create the system of seven equations for the overall process [5]. The hypotheses used by this model are:
1) All gunpowder particles are homogeneous in both chemical and physical terms. All chemical imperfections and inaccurate dimensions and shapes of the gunpowder grain are neglected.
2) All gunpowder grains are caught by the flame at the same time and momentarily. This means that all gunpowder grains start to burn at the same time and that burning starts on the burning surfaces of the gunpowder grain.
3) There are no differences in the burning speed across all burning surfaces of the gunpowder grain.
The first three hypotheses are known as the geometrical law of burning [1].
1) In the volume behind the projectile, the burning of the gunpowder grains is determined by the geometrical law of burning.
2) When the projectile starts to move, the pressure in the volume behind the projectile is at a constant value (p0).
3) The barrel of the system is not movable and the projectile speed is determined relative to the barrel.
4) The surface at any cross section of the barrel is constant; the rifling and the transition cone are neglected.
5) Air drag in front of the projectile is neglected.
6) Heat transfer between the barrel and the gunpowder gases is neglected.
7) The coordinate system is connected to the bottom of the projectile.
8) Projectile rotation, friction on the transition cone, the movement of the gunpowder and gunpowder gases and the recoil of the weapon system parts are taken into account through the energy equation and the motion of the projectile as a coefficient φ.
9) The pressure of the gunpowder gases is equal in any part of the volume behind the projectile.

The equations are [1]:
1) Equation of energy balance:

p·Sc·(ℓψ + X) = fb·mb·ψ − ((k − 1)/2)·φ·m·v²      (1)

2) Equation of the projectile's motion:
φ·m·(dv/dt) = p·Sc      (2)

3) Definition of the projectile's speed:
v = dX/dt      (3)

4) Burning speed of the gunpowder:
uz = uz0·p      (4)

5) Relative burnt mass of the gunpowder:
ψ = κ·y·(1 + λ·y + μ·y²)      (5)
or
ψ = κ1·y·(1 + λ1·y)      (6)

6) Relative burning surface of the gunpowder:
σ = 1 + 2λ·y + 3μ·y²      (7)
or
σ = 1 + 2λ1·y      (8)

7) Reduced length of the free volume:
ℓψ = Wψ/Sc = [W0 − (mb/δ)·(1 − ψ) − α·mb·ψ] / Sc      (9)

In the equations there are several variables and constants:
ψ - relative burnt mass of the gunpowder
ℓψ - reduced length of the free volume
X - traveled distance of the projectile
mb - mass of the propellant
p - pressure of the gases behind the projectile
Sc - area of the cross section of the barrel
Rg - gas constant
T - temperature of the gases
fb - specific work of the gunpowder gases
k - adiabatic index
m - mass of the projectile
v - velocity of the projectile
uz0 - universal burning speed of the gunpowder
κ, λ, μ, κ1, λ1 - shape coefficients
δ - density of the gunpowder
α - volume of all particles of the gunpowder gases per unit of mass
W0 - volume of the chamber
Wψ - free volume of the chamber
φ - fictivity coefficient, which represents the relation between the overall work of the gunpowder gases spent on different processes and the kinetic energy of the projectile

The equations, variables and constants are all explained in reference [1]. These equations are used in order to mathematically determine the function of the burning surface of the modified gunpowder grain.

During the firing process there are four periods, of which the first three are important for this paper. The starting period is the period of burning of the gunpowder inside a permanent volume until the pressure reaches the value p0 needed to overcome the shear strength of the projectile's rotating band. The next period is the period in which the gunpowder burns while the projectile is accelerating down the barrel of the weapon because of the pressure of the gunpowder gases behind it. This period is known as the first period. The last period is the period when the whole mass of the gunpowder has been transformed into the gas state. This period is otherwise known as the second period. During this period the projectile accelerates as a result of the pressure of the gunpowder gases behind it [1].

To solve the system of these equations using any method of classic interior ballistic calculation, one variable is chosen as the independent variable, while the other variables are determined as functions of that independent variable. By varying the independent variable within a predefined interval, the set of dependent variables can be solved. The independent variable differs from one period of the firing process to the other. The systems of equations of the individual periods are connected via the starting and ending conditions of each period [1].

2.2. Burning surface of the gunpowder particle
The main condition is that the pressure is constant during the burning process of the gunpowder. If so, then using equation (2) and Newton's mechanics, the equation for two different moments during the burning process can be written as

v2² − v1² = 2·(p·Sc/(φ·m))·ΔX      (10)

The change in parameters during this transition, using equations (1) and (10), can be written as

p·Sc·(Δℓψ + ΔX) = fb·mb·Δψ − p·Sc·(k − 1)·ΔX      (11)

Using equations (9) and (11) and transforming the equation into differential form, it can be written as

dψ = C·dX      (12)

where C is a constant which depends on several parameters, C = C(p, Sc, mb, fb, ...).
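As a simple numerical illustration of the form functions (5)-(8), the following sketch evaluates ψ(y) and σ(y) for one progressive and one degressive set of shape coefficients. The coefficient values are chosen for illustration only; they are not the parameters from Table 1 or from reference [1].

    def psi(y, kappa, lam, mu=0.0):
        # Relative burnt mass, eq. (5)/(6): psi = kappa*y*(1 + lam*y + mu*y^2).
        return kappa * y * (1.0 + lam * y + mu * y * y)

    def sigma(y, lam, mu=0.0):
        # Relative burning surface, eq. (7)/(8): sigma = 1 + 2*lam*y + 3*mu*y^2.
        return 1.0 + 2.0 * lam * y + 3.0 * mu * y * y

    # Illustrative shape coefficients (not taken from the paper):
    for y in (0.0, 0.25, 0.5, 0.75, 1.0):
        s_prog = sigma(y, lam=+0.2)    # progressive grain: surface grows while burning
        s_degr = sigma(y, lam=-0.2)    # degressive grain: surface shrinks while burning
        print("y=%.2f  psi=%.3f  sigma_progressive=%.2f  sigma_degressive=%.2f"
              % (y, psi(y, 1.0, 0.2), s_prog, s_degr))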
The law of creation of the gunpowder gases, presented through the ψ equation, can be written as

dψ/dt = (Sz0/Wz0)·uz0·p      (13)

where
Sz0 - burning surface of the gunpowder particle at the beginning of the burning process
Wz0 - volume of the gunpowder particle at the beginning of the burning process

Transforming equation (4) using the relative burnt distance y, the equation can be written as

dy/dt = d(rz(t)/r0)/dt = (1/r0)·drz(t)/dt = uz/r0 = p/Ik      (14)

where
rz - distance traveled by the flame
r0 - minimal dimension of the gunpowder grain
Ik - impulse of the projectile

Using equations (2), (10), (13) and (14) we can write

σ(x) = C2·x      (15)

where x is the relative burnt distance and can be written as

x = y − y1      (16)

where y1 is the relative burnt distance at which the condition of constant pressure during the burning process becomes fulfilled.

Assuming that the relative burning surface of the gunpowder grain is the ratio between the burning surface of the gunpowder grain and the burning surface at the beginning of the burning process, the burning surface of the gunpowder grain can be written as

Sz(x) = C3·x      (17)

The equation shows that the burning surface is a linear function of the burnt distance. It is clear that this function constantly increases over time. There are several shapes of gunpowder grain which fulfil this condition. All of them must use inhibited surfaces, in other words surfaces that are not flammable. The simplest one is the tubular grain with an inhibited outer surface. This means that the gunpowder grain will have only one burning surface, and that will be the inner surface of the gunpowder grain, as shown in picture 3.

Picture 3: Modified gunpowder grain made in CATIA

The dimensions of the gunpowder grain can be determined in several ways. The easiest way to determine the dimensions of the gunpowder grain is by varying the dimensions of the grain itself using a program solution in one of the programming languages.

3. RESULTS AND ANALYSIS
The determined dimensions are integrated into the program solution of the classic interior ballistic method of calculation. The calculation is done for the 12.7 mm M93 long range rifle and the results are compared with the gunpowder grain in use, which is a cylindrical seven-perforation gunpowder grain. In the calculation, both gunpowder grains used the same starting conditions except for the geometrical characteristics of the grains, which were different. The geometrical parameters used in the calculation can be seen in Table 1.

Table 1: Geometrical parameters of the gunpowder grains
Gunpowder grain in use        1.18890    -0.15890
Modified gunpowder grain      0.35129     1.84667

The program solution is written in the programming language Free Pascal. Comparisons between the results are given in the graphics, which were made in MATLAB using the results.
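The program solution itself is not reproduced here; purely as an illustration of the geometric relationship behind equation (17), the following sketch (Python, with illustrative dimensions that are not the ones computed for the 12.7 mm M93 charge) checks that a tubular grain whose outer and end surfaces are inhibited has a burning surface that grows linearly with the burnt distance.

    import math

    def inner_burning_surface(burnt, r_inner0=0.3e-3, length=3.0e-3):
        # Burning surface (m^2) of a tubular grain with inhibited outer and end
        # surfaces: only the inner channel burns, the flame moves radially outwards.
        # Dimensions are illustrative assumptions, not the paper's results.
        r = r_inner0 + burnt
        return 2.0 * math.pi * r * length

    s0 = inner_burning_surface(0.0)
    for burnt in (0.0, 0.1e-3, 0.2e-3, 0.3e-3):
        s = inner_burning_surface(burnt)
        print("burnt = %.1e m   S = %.3e m^2   S/S0 = %.2f" % (burnt, s, s / s0))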
Figure 1: Pressure-time diagram


Figure 1 shows that the pressure of the modified gunpowder grain is held at much lower values. Besides that, the pressure is held at a constant value for a certain duration of the firing process. However, that phase does not last long: it lasts roughly 40% of the time of the whole burning process. The behaviour of the curve during the first 60% of the burning process of the gunpowder results in a lower muzzle velocity compared to the gunpowder charge which is in use, as shown in Figure 2.

Figure 2: Velocity-time diagram

The reason for that kind of behaviour is that the gunpowder burns slowly during the first 60% of the burning, so the creation of gunpowder gases is not as high as with the gunpowder charge which is in current use. That happens because the modified gunpowder grain has a small burning surface at the beginning of the burning process. Therefore a high rise in pressure does not occur, which results, at the end of the firing process, in a lower muzzle velocity, which is the downside of the modified gunpowder grain. The maximum pressure, on the other hand, is about 60% lower compared to the maximum pressure created by the gunpowder grains which are in use. This means that the barrel with the currently used gunpowder charge is much more loaded as a result of the higher maximum pressure. Because of this, the barrel needs to be much thicker when the current gunpowder charge is used. This is not the case if the modified gunpowder grains are used as the composition of the gunpowder charge. Besides this, the pressure is held at a constant level for more than 50% of the barrel's length when the modified gunpowder grains are used, as shown in Figure 3. This could result in a simpler barrel construction.

As the dimensions depend on the mass of the gunpowder charge, it is clear that, when changing the mass of the gunpowder charge in order to have a more efficient firing process, it is necessary to change the dimensions of the gunpowder grains as well. Increasing the mass of the gunpowder charge without changing the dimensions of the grains would result in better interior ballistic parameters compared to the gunpowder charge in use.

Figure 3: Pressure-traveled distance diagram

4. CONCLUSION
In this paper, a gunpowder grain design is made based on the condition of constant pressure during the burning process of the gunpowder. The gunpowder grain was determined using a mathematical model based on the classic interior ballistic theory. The determined gunpowder grain showed interesting results when it was implemented in the classic interior ballistic method of calculation. It is shown that the designed modified gunpowder grain could certainly help in pressure reduction without losing too much of the projectile's muzzle energy. The reason for the lower muzzle energy is that the burning surface of the modified gunpowder grain is small at the beginning of the burning process, therefore creating a smaller amount of gunpowder gases compared to the gunpowder grains which are in current use. It is possible to use the same shape of gunpowder grains, but with different dimensions, in any other weapon system, regardless of the projectile type, and similar results can be expected.

Further studies of gunpowder charges consisting of modified gunpowder grains should be done. Improvement can be achieved by making a combined charge for a defined system [3]. Inserting another type of gunpowder grain, smaller than the modified gunpowder grain used in that charge but with a bigger burning surface at the beginning of the burning process, for example progressive ones, would result in a greater pressure increase. The reason is that the burning surface will increase more rapidly during the initial phase of the burning process compared to a gunpowder charge consisting only of modified gunpowder grains, therefore allowing the pressure to increase much faster. In this case, the modified gunpowder grain should be designed so that the burning surface of the modified gunpowder grains, at the moment when the smaller gunpowder grains are completely transformed into gunpowder gases, fulfils the condition of constant pressure. It is very difficult both theoretically and practically to achieve this, but a roughly determined gunpowder charge of this kind would result in improved interior ballistic parameters. The gunpowder charge of this configuration, if it could be created in this form, should be considered primarily as a composition of gunpowder charges for kinetic energy rounds. Current gunpowder charges of these projectiles feature very high maximum pressures; therefore, a modified combined gunpowder charge could result in reduced pressure.

The modified gunpowder grain is characterized by a low maximum pressure during the firing process. This characteristic affects several features of the weapon system. Because of the low maximum pressure, the barrel of the weapon system can be thinner. If this type of gunpowder charge is implemented in an artillery system, it could help in the reduction of the mass of the whole system, which is one of many demands for modern artillery systems [6]. The maximum pressure influences the barrel life a lot; therefore, using modified gunpowder grains as a gunpowder charge would definitely increase the barrel life because of the lower maximum pressure [3].

It is important to mention that this paper is based on a theoretical approach to the problem and was not tested through experiment; therefore, the modified gunpowder grain shape will certainly not behave during the firing process in practice exactly as shown in the figures. The reason for that is the hypotheses used by the classic interior ballistic model and the overall approach to the problem. Although the results created by the model shown in the paper should not differ a lot from the realistic ones, experiments should be done. The technology of making gunpowder grains as demanded by this paper exists, but it is questionable whether the quality of the modified gunpowder grains created by the existing technology would fulfil the demands for modern gunpowder charges. The main problem is inhibiting the surface of the gunpowder grain, which in this case is the whole outer surface, while the inner surface should remain intact.

ACKNOWLEDGEMENTS
I would like to express sincere thanks to all professors of the department of weapon systems of the Military Academy in Belgrade for their unselfish help and suggestions in the present study.

References
[1] Cvetković: Unutrašnja balistika, Beograd, 1998.
[2] Tančić, Lj.: Praktikum iz unutrašnje balistike, Beograd, 2012.
[3] Nebojša, H., Savić, S.: Modelovanje dvofaznog strujanja u cevima oruđa sa kombinovanim punjenjem, Vojnotehnički glasnik, 59(4) (2011) 158-173.
[4] Rao, K.P., Bartakke, A.S., Nair, R.G.K.: Liquid propellant for advanced gun ammunitions, Defence Science Journal, Vol. 37, No. 1, January 1987, pp. 45-50.
[5] Tančić, Lj.: Zbirka zadataka iz unutrašnje balistike, Beograd, 1999.
[6] Ristić, Z., Ilić, S., Jerković, D.: Karakteristike i zahtevi konstrukcija lakih artiljerijskih oruđa, OTEH 2007, II Naučno-stručni skup iz oblasti odbrambenih tehnologija, VTI, Beograd, 2007, ISBN 978-86-81123-49-2.

COMPOSITE SOLID PROPELLANTS WITH OCTOGENE


VESNA RODIĆ
Military Technical Institute, Belgrade, springvesna63@gmail.com
MARICA BOGOSAVLJEVIĆ
Military Technical Institute, Belgrade, marica.radusinovic@gmail.com
ALEKSANDAR MILOJKOVIĆ
Military Technical Institute, Belgrade, zabackermit@gmail.com
SAŠA BRZIĆ
Military Technical Institute, Belgrade, sasabrzic@gmail.com

Abstract: Composite solid propellants based on ammonium perchlorate/hydroxy-terminated polybutadiene/isophorone diisocyanate including different contents of octogene (HMX) are presented in this paper. The mass of HMX was increased in relation to the oxidizer, with a constant bimodal fraction ratio. The combustion of the propellants has been improved by adding titanium (IV) oxide powder as a stabilizer. The parameters of the burning rate laws and the apparent viscosity values were determined and compared for the propellants with the same total solid phase.
Key words: composite solid propellant, octogene, titanium (IV) oxide, ballistic behaviour.

INTRODUCTION
Some efforts to increase solid motor performance have led to the use of high energy compounds, often pure explosives. Their application in cast composite propellants offers improved specific impulse, translating into greater range for a rocket system. Combustion design can be used to tailor propellants so as to achieve the desired, optimum, steady and non-steady burning characteristics.
The purpose of this research is to investigate the combustion of composite solid propellants which include bimodal ammonium perchlorate powder and high-energy cyclotetramethylene-tetranitramine (octogene) crystals in a polybutadiene binder. These formulations will be used to determine the parameters of the burning rate law for a variety of propellants consisting of various combinations of oxidizer crystals.

THEORETICAL PART
The specific chemical composition depends on the desired combustion characteristics. Different chemical ingredients and their proportions lead to different physical and mechanical properties, combustion characteristics and performance. The propulsion of a solid propellant depends on many characteristics, determined by the type of ingredients and their concentrations.

All explosive materials contain oxygen, which is needed for the explosive reaction to take place. The oxygen can be introduced by chemical reactions or by the mechanical incorporation of materials containing bound oxygen. The most important solid-state oxidizer is ammonium perchlorate [1].

Commercial explosives must have an oxygen balance close to zero to minimize the amount of toxic gases, CO, and nitrous gases, which are evolved in the fumes.
Ammonium perchlorate (AP), Picture 1, is the most widely used crystalline oxidizer for composite propellants. Unlike alkali metal perchlorates, it has the advantage of being completely convertible to gaseous reaction products.
Nitramines are high-energy-density materials that produce high-temperature gaseous products. When some portion of the AP particles in a propellant is replaced with nitramine particles, an AP-nitramine propellant is formulated [2].

Picture 1. Ammonium perchlorate powder

The AP particles first decompose at 250 °C in the sub-surface region to form perchloric acid (HClO4), and the polybutadiene binder decomposes to produce fuel in the form of hydrocarbon fragments and hydrogen. HClO4 decomposes further to form smaller oxidizing species [3]. These decomposed gases, consisting of fuel and oxidizer components, mix together to form a diffusion flame above the propellant-burning surface according to the following chemical description:

4NH4ClO4 → 2Cl2 + 3O2 + 8H2O + 2NO2

Nitramines such as cyclotetramethylene-tetranitramine (HMX), Picture 2, are an important ingredient in propellants used in solid rocket propulsion. The average molecular weight of the combustion products of composite nitramine propellants is lower than that of the more conventional AP-based composite propellants. Consequently, the propellant performance is greater than that of AP propellants at equivalent flame temperatures, since propellant performance is inversely proportional to the molecular weight of the exhaust products. Besides, propellants with HMX crystals, shown in Picture 3, tend to have higher densities than AP propellants and, thus, their density impulse is greater. Other advantages gained by using nitramines in propellant formulations include excellent thermal stability, a low propellant flame temperature and non-toxic, non-corrosive combustion products. However, numerous problems have been encountered by the substitution of HMX for AP in propellant formulations. These include high burn rate exponents, exponent shifts, low burning rates, and difficulty in tailoring these low burning rates and high pressure exponents. In addition, nitramine composite propellants are much more difficult to ignite when compared to AP-based composite propellants [4].

Picture 2. Octogene (HMX)

In contrast to propellants based on AP, nitramine propellants do not produce hydrochloric acid (HCl) unless AP is incorporated into the propellant to serve as a ballistic modifier. Besides being corrosive, HCl in the exhaust provides nucleation sites for moisture droplets to condense upon, thereby producing a visible contrail or secondary smoke [2].

The burning rates of HMX propellants are usually much lower than those of comparable propellants containing AP. HMX inert binder propellants have burning rates of 1.3 to 5.1 mm/s at 70 bar versus rates of 7.6 to 38 mm/s for similar inert binder propellants containing AP, because fine AP particles give higher burning rates. However, for HMX propellants, variations in particle size have a minimal effect on the burning rate, as can be seen by comparing the burning rate range above with that of AP propellants. In order to increase the burning rate, propellant formulators have turned to additives such as metal catalysts, but the pressure exponent was high and these rates are still below those of AP-based propellants. For minimum variation in the thrust or chamber pressure, the pressure exponent and the temperature coefficient should be small [4].

Picture 3. The particles of HMX

The tailorability of HMX, or other nitramine, active-binder propellants is therefore more limited than for AP propellants [4].
Anyway, in order to produce a usable propellant formulation, it is necessary to control the burn rate of the propellant, to prevent unacceptable performance (too high or too low pressure) for the intended purpose of the device. The equation linking the burning rate parameters is shown in (1):

v = B·p^n      (1)

where:
v - burning rate,
p - pressure in the motor chamber,
B - constant depending on the grain temperature, and
n - pressure exponent.

Instability during combustion has to be avoided because of its many undesirable consequences, and it can be prevented by adding specific additives [5]. These compounds may have an influence on all characteristics, especially the ballistic performance, and on achieving the necessary requirements for the propellant. One of the refractory oxides, TiO2, is used in this research because of its ability to affect the combustion process, even forming a plateau in compositions with a wide distribution of AP particle sizes [6, 7].

EXPERIMENTAL PART
Three main groups of compositions, based on a constant bimodal ratio of ammonium perchlorate and a constant polybutadiene/diisocyanate NCO/OH ratio, have been prepared for this research:
I - the empty one, i.e. the referent formulation,
II - the unstabilized formulations and
III - the stabilized formulations.
The content of HMX was varied at two equal levels in the stabilized and unstabilized batches, and it is shown in Table 1.

Table 1. The prepared propellant compositions
No    AP-200 [mas.%]   AP-7 [mas.%]   HMX [mas.%]   TiO2 [mas.%]
0     52.00            28.00          -             -
11    42.25            22.75          15.00         -
12    35.75            19.25          25.00         -
21    42.25            22.75          15.00         2.00
22    35.75            19.25          25.00         2.50

The propellant binder matrix was based on hydroxyl-terminated polybutadiene (HTPB) as the prepolymer and isophorone diisocyanate (IPDI) as the curing agent, with the addition of other standard components such as a plasticizer, a bonding agent and an antioxidant.
In all cases:
- the bimodal mixture ratio of AP was 65:35 (with average particle sizes of 200 μm and 7 μm);
- parts of the AP are exchanged with HMX;
- the HMX used was class 5 (< 125 μm) [8];
- the total (energetic) solid phase (AP+HMX) was 80 mas.%.
From the previous chapter it is obvious that particle size control is a very important part of the research. This especially applies to the AP powder, because of the great influence of the particle size distribution on the burning rate law.

The fine AP particles have been prepared by grinding the entering powder, i.e. the coarse 200 μm fraction, in the Hammer mill ACM-10, shown in Pictures 4 a) and 4 b), to very small, nonspherical particles of AP.

Picture 4a). Hammer mill ACM-10 - input housing

Picture 4b). Hammer mill ACM-10 - output housing

The average particle diameter of the ground (so-called fine) fraction of AP is measured by the Fischer sub-sieve sizer (FPA), and the measuring results of two samples can be seen in Picture 5. From the diagram it is obvious that the average size of those small fractions is about 7 μm.

Picture 5. Particle size values of two fine AP samples delivered from the FPA

All previously mentioned ingredients have been homogenized at 60 °C in the laboratory vertical planetary mixer, Picture 6, according to the applied mixing sheet (consisting of a premix, consecutive adding of the bimodal AP, and the curing agent at the end of the process).
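A small arithmetic check of the formulation constraints listed above (bimodal AP ratio of 65:35 and a total energetic solid phase of 80 mas.%), applied to the Table 1 compositions:

    # Compositions from Table 1, in mas.%: (AP-200, AP-7, HMX, TiO2)
    compositions = {
        "0":  (52.00, 28.00,  0.00, 0.00),
        "11": (42.25, 22.75, 15.00, 0.00),
        "12": (35.75, 19.25, 25.00, 0.00),
        "21": (42.25, 22.75, 15.00, 2.00),
        "22": (35.75, 19.25, 25.00, 2.50),
    }

    for name, (ap200, ap7, hmx, tio2) in compositions.items():
        ap_total = ap200 + ap7
        coarse_share = 100.0 * ap200 / ap_total   # should be 65 (bimodal ratio 65:35)
        energetic = ap_total + hmx                # should be 80 mas.% in all cases
        print("%-3s AP-200:AP-7 = %.0f:%.0f   AP+HMX = %.2f mas.%%   TiO2 = %.2f mas.%%"
              % (name, coarse_share, 100.0 - coarse_share, energetic, tio2))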

Picture 6. Uncured composite AP/HMX propellant

Afterwards, the propellant is cast into the PVC chambers of 2-inch experimental motors, shown in Picture 7, used for the static tests at the fire station for solid rocket propellants.

Picture 7. Uncured propellant before mandrel cast

The view of the test table at the fire station during the examination is seen in Picture 8.

Picture 8. The view of a static test example

Alongside the specimen casting for the static tests, a small part of uncured propellant was separated from the bowl for measuring the apparent viscosity values. From the very beginning of the cure time, every 15 minutes, the values are read out from the Brookfield HBT viscometer dial at (60±2) °C, as shown in Picture 9.

Picture 9. Uncured propellant in the viscometer double-wall glass sample holder

Propellant grains for the static tests were cured at (70±2) °C for 5 days and, after cooling in the ambient environment, the grains were removed from the curing tools. After laboration into the testing motors, including the appropriate nozzles, the specimens are ready for use.

RESULTS OF EXAMINATIONS
The results of the viscosity measurements over a prolonged period of time are shown in Table 2.

Table 2. Viscosity values of the propellants during time
No    Viscosity (Pa s) vs. time (min)
      15      30      45      60      75      90
0     97.6    102.4   129.6   147.2   184.0   216.0
11    104.0   128.0   160.0   184.0   202.2   -
12    134.4   164.8   195.2   228.8   248.4   -
21    176.0   236.0   280.0   312.0   352.0   384.2
22    264.0   368.0   419.2   494.4   541.4   -

The results of the burning rate law parameter examinations at 20 °C are given in Table 3: burning rate values at 70 bar (v70), pressure exponents (n) and constants (B).

Table 3. Burning rate law parameters at 20 °C
No    v70 (mm/s)   n        B        R²
0     6.67         0.2534   2.2743   0.9440
11    6.24         0.2490   2.1667   0.9902
12    5.62         0.2865   1.6648   0.9815
21    6.92         0.2353   2.5469   0.9493
22    6.37         0.2840   1.9062   0.9998
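As an illustration of how parameters such as those in Table 3 are obtained from equation (1), the sketch below fits n and B by linear least squares on log-transformed data. The pressure/burning-rate pairs are invented for the example and are not measurements from this work.

    import math

    # Invented (pressure [bar], burning rate [mm/s]) pairs, for illustration only.
    data = [(50.0, 6.1), (70.0, 6.7), (100.0, 7.3), (130.0, 7.8), (160.0, 8.2)]

    # v = B * p^n  (eq. (1))  =>  log10(v) = log10(B) + n * log10(p)
    x = [math.log10(p) for p, _ in data]
    y = [math.log10(v) for _, v in data]
    n_pts = len(data)
    x_mean = sum(x) / n_pts
    y_mean = sum(y) / n_pts
    n_exp = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) / \
            sum((xi - x_mean) ** 2 for xi in x)
    B = 10.0 ** (y_mean - n_exp * x_mean)

    v70 = B * 70.0 ** n_exp   # burning rate predicted at 70 bar
    print("n = %.4f   B = %.4f   v70 = %.2f mm/s" % (n_exp, B, v70))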

OTEH2016

COMPOSITESOLIDPROPELLANTSWITHOCTOGENE

cases that incorporate 25 mas.% HMX, with or without the stabilizer present. It is well known from the literature that the burning rates of HMX propellants are usually much lower than those of comparable propellants containing AP [4]. An explanation for the small variation of the burning rate at pressures below 200 bar is that HMX composite propellants, although heterogeneous in physical structure, burn more like a homogeneous propellant. During burning, the crystalline HMX, shown in Picture 3, melts (from 276 °C to 286 °C) together with the polymeric binder on the propellant burning surface and forms a chemically energetic liquid mixture. Because of this melting, the combustion waves in the gas phase are homogeneous.

DISCUSSION
These formulations represent the preliminary examination of the possibilities of exchanging AP for explosives, in this case HMX. The change of the viscosity values at the mixing temperature and the burning rate laws at ambient temperature will be presented graphically for the sake of a more convenient consideration and comparison of the compositions.
First of all, it is very important to emphasize the pot life of these propellants, which is clear from Picture 10, based on Table 2.
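As an illustration only (not part of the original study), the pot life suggested by Table 2 can be quantified by interpolating the time at which the apparent viscosity reaches an assumed casting limit; the limit used in the Python sketch below is a placeholder value, not a figure from this work.

# Illustrative sketch (assumed casting limit, not a value from the paper):
# estimate pot life as the time at which the Table 2 viscosity reaches the limit.
times = [15, 30, 45, 60, 75]                      # min
viscosity = {
    "0":  [97.6, 102.4, 129.6, 147.2, 184.0],     # referent propellant
    "22": [264.0, 368.0, 419.2, 494.4, 541.4],    # highest-viscosity composition
}
CASTING_LIMIT = 400.0                             # Pa s, placeholder assumption

def pot_life(t, eta, limit=CASTING_LIMIT):
    # Linear interpolation of the first crossing of the limit.
    for t0, e0, t1, e1 in zip(t, eta, t[1:], eta[1:]):
        if e0 <= limit <= e1:
            return t0 + (limit - e0) * (t1 - t0) / (e1 - e0)
    return None                                   # limit not reached in the measured interval

for label, eta in viscosity.items():
    print(label, pot_life(times, eta))            # "0" -> None, "22" -> about 39 min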

Picture 10. Apparent viscosity changes at 60 °C (apparent viscosity in Pa s versus time in min for propellants 0, 11, 12, 21 and 22)

Picture 11. Burning rate laws for propellants with 15 mas.% HMX (burning rate v in mm/s versus pressure P in bar for propellants 0, 11 and 21)


Firstly, it is obvious that introducing HMX instead of AP has a relatively small negative effect on the viscosity values. The large HMX molecule causes more prominent steric hindrance than the smaller AP. Moreover, the addition of another solid ingredient leads to significantly larger differences between the two compositions containing it and the others. This is obviously the consequence of the increase of the total solid phase, and probably of the presence of some kind of complex compound. It is more visible in the case of the propellants with the large HMX molecules.


For better conclusions, the burning rate law parameters are presented in two graphs: each shows one HMX content level in relation to the referent composition containing only AP (labeled 0). Pictures 11 (15 mas.% HMX) and 12 (25 mas.% HMX) show the pressure/burning-rate relation of these propellants.

Propellant 21 (P 21) has a higher burning rate than the referent one (P 0), as does the other propellant with the same HMX content. The only difference between the propellants labeled 11 and 21 is the presence of 2 mas.% of the stabilizer TiO2. At first sight this is not expected, because in AP composite solid propellant systems TiO2 causes a decrease of the burning rate as a consequence of the fall of the pressure exponent (for the same bimodal AP fraction) [6]. So, it has to be the result of the different influence of the chemical structures of AP and HMX.

Picture 12. Burning rate laws for propellants with 25 mas.% HMX (burning rate v in mm/s versus pressure P in bar for propellants 0, 12 and 22)
An interesting literature finding from observations of HMX combustion is the discontinuity in the dependence of the burning rate on pressure: an abrupt change in the burning rate at around 50 bar, where the pressure exponent becomes very large over a narrow pressure range and elevates the burning rate by several orders of magnitude. One explanation for this burning rate behavior states that the

On the other hand, the further exchange of AP with HMX, shown in Picture 12, reverses the burning rate trend in both

added heat transfer to the larger crystals causes the crystals to crack due to the added thermal stresses. The thermal cracking increases the exposed crystal surface area. The greater surface area increases the burning rate by allowing even greater heat transfer into the solid. The above explanation indicates that the slope breaks result from structural changes in the HMX crystals, and not from changes in the actual burning rate pressure dependence of pure HMX. In the follow-up, the authors of that work state that small HMX particles do not show this behavior and have a much smoother burning rate versus pressure curve [4], which is exactly what is clearly demonstrated in our work.

higher burning rate than the referent one, as the result of the stabilizer presence. The known influence on AP composite solid propellant systems is a decrease of the burning rate as a consequence of the fall of the pressure exponent due to TiO2. So, it has to be the result of the different influence of the chemical structures of AP and HMX.
The fine particle size of HMX was smaller than 125 µm, so crystal cracking is not really expressed. After the replacement with HMX, the remaining AP kept the same ratio of the bimodal mixture, which is the most favorable mixture ratio of those particle sizes from the point of view of combustion stability. The addition of HMX gives more than acceptable values of the pressure exponent, lower than 0.3. It might even be said that the effect of the titania compound was not extremely significant, so that its presence is not necessary for these HMX contents, even according to severe requirements.

Indeed, in this research the particle size of HMX was smaller than 125 µm, which is the finest HMX class in use, so crystal cracking is not really expressed. Apart from that, after the replacement with HMX, the remaining AP kept the same ratio of coarse and fine particles, which is the most favorable mixture ratio of those particle sizes from the point of view of combustion stability [9]. So, the addition of HMX gives more than acceptable values of the pressure exponent (Table 3). It might even be said that the effect of the titania compound was not extremely significant, so that its presence is not necessary. But this does not mean that any other supplement would have the same role and consequence, for example copper (II) phthalocyanine, which is intended for the stabilization of nitramine propellants.

These formulations are the beginning of the development of smokeless composite solid propellants. The principal benefit of this type of propellant is the lack of a visible signature. This enables the firing position to remain concealed and thus less vulnerable to hostile action.
References
[1] Meyer, R., Köhler, J., Homburg, A.: Explosives, Fifth Edition, Wiley-VCH Verlag GmbH & Co. KGaA, 2002.
[2] Kubota, N.: Combustion of Composite Propellants, in Propellants and Explosives: Thermochemical Aspects of Combustion, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany, doi: 10.1002/9783527693481.ch7, 2015.
[3] Shalini, C., Pragnesh, N. D.: Solid propellants: AP/HTPB composite propellants, Review, Department of Chemistry, Krantiguru Shyamji Krishna Verma Kachchh University, Gujarat, India, 2014.
[4] Lengellé, G., Duterque, J., Trubert, J.F.: Combustion of Solid Propellants, RTO/VKI Special Course on Internal Aerodynamics in Solid Rocket Propulsion, Rhode-Saint-Genèse, Belgium, 27-31 May 2002, published in RTO-EN-023.
[5] Rodić, V., Dimić, M., Brzić, S., Gligorijević, N.: Cast Composite Solid Propellants with Different Combustion Stabilizers, Scientific Technical Review, Belgrade, 2015, Vol. LXV, No. 2.
[6] Rodić, V.: The effect of titanium (IV) oxide on burning stability of composite solid propellants, 3rd symposium OTEH, Belgrade, 6-7.10.2011.
[7] Rodić, V.: Effect of titanium (IV) oxide on composite solid propellant properties, Scientific Technical Review, Belgrade, 2012, Vol. 62, No. 3-4, pp. 21-27.
[8] SORS 7572/97, Brizantni (sekundarni) eksplozivi, Oktogen, Ciklotetrametilenetetranitramin.
[9] Rodić, V., Fidanovski, B.: Burning Stability of Composite Solid Propellants Including Zirconium Carbide, Scientific Technical Review, Belgrade, 2013, Vol. 63, No. 3, pp. 33-40.

CONCLUSION
The examination in this paper represents preliminary research into the possibilities of exchanging oxidizers for explosives, through the development of composite solid propellants based on ammonium perchlorate (AP) and formulations with two levels of cyclotetramethylene-tetranitramine, i.e. octogene (HMX). The binder was based on hydroxyl-terminated polybutadiene prepolymer with isophorone-diisocyanate as the curing agent. Five compositions were prepared, and in two of them TiO2 was added as a combustion stabilizer. The basic solid phase was 80 mas.% of a bimodal AP mixture (200 µm and 7 µm in the ratio 65/35); the exchange with HMX was done at two levels, 15 mas.% and 25 mas.%, whereby the remaining AP keeps the same ratio. The HMX particles were class 5 (< 125 µm) and are considered as fine. The fifth composition was the referent one, without additives.
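For orientation, and assuming (as stated above) that HMX simply replaces part of the AP within the fixed 80 mas.% solid phase, the resulting mass fractions work out as: referent - AP 80 % (coarse 52 %, fine 28 %); 15 mas.% HMX - AP 65 % (coarse 42.25 %, fine 22.75 %); 25 mas.% HMX - AP 55 % (coarse 35.75 %, fine 19.25 %), the coarse/fine split following the 65/35 ratio in each case.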
The change of the viscosity values at the mixing temperature and the burning rate law parameters at 20 °C are the characteristics which have been discussed in this work.
Introducing HMX instead of AP has a relatively small negative effect on the viscosity values. The large HMX molecule causes more prominent steric hindrance than the smaller AP. Moreover, the addition of another solid ingredient leads to significantly larger differences among these compositions. This is obviously the consequence of the increase of the total solid phase and the presence of some kind of complex compounds, which is more visible with the large HMX molecules.
The combustion properties of the propellants with HMX in relation to the referent one (containing only AP, labeled 0) show that, generally speaking, the burning rate decreases with the addition of HMX. Only P 21 has a slightly

SOLVING TECHNICAL PROBLEMS WHILE WORKING WITH ORDNANCE USING INNOVATION PRINCIPLES
OBRAD ČABARKAPA
Faculty of Applied Management, Economics and Finance (Belgrade), Academy of Economy University (Novi Sad), obrad.cabarkapa@gmail.com
DUŠAN RAJIĆ
Innovation Centre of Faculty of Technology and Metallurgy, University of Belgrade, rajic.dusan1@gmail.com
MARIJA MARKOVIĆ
Faculty of Technology and Metallurgy, University of Belgrade, marija.nightelf@gmail.com

Abstract: Ordnance represents a sub-system of weaponry and military equipment, which is being used daily by defense
and security forces in realization of the assigned tasks. Throughout the life span of ordnance- from its development and
construction, through production, storage, handling and use, to its retirement- certain technical problems occur and
need to be solved. This paper shows how to use 40 innovation principles, one of the TRIZ tools, in solving possible
problems while working with ordnance. TRIZ methodology is based on the axiom according to which the development
of all technical systems, ordnance included, takes place according to objective laws. Application of TRIZ innovation
principles in solving technical problems facilitates the unwinding of this technical- technological evolution.
Keywords: innovation principles, TRIZ, ammunition, ordnance, contradiction.
matrix as the "main tool". The inventive principles are used to solve technical contradictions, but the question is which of the 40 inventive principles to choose to solve a specific problem. The answer to this question is given by the matrix, which provides a choice of the most efficient inventive principle that should be applied to a certain problem in order to solve it.

1. INTRODUCTION
The early development of ordnance (munitions) has its
roots in ancient history. Nowadays, munitions are highly
sophisticated products, characterized by high precision
and high destructive power. Due to its complexity,
ordnance justifiably can be regarded as an independent
technical system (TS). In an effort to produce ordnance
with a greater destructive power and precision yet lower
levels of investment throughout its life cycle, it is
necessary to overcome a variety of problems. A scientific
methodology known as Theory of Solving Inventive
Tasks (TRIZ) [1,2] can be used to resolve them very
effectively. The aim of this paper is to illustrate one
possible way of applying TRIZ to solving problems that
arise when working with ordnance.

Any TS can be described with 39 parameters (e.g. power, voltage, pressure, reliability, moving object mass, adaptability, complexity of devices, etc.), which are contained in the contradiction matrix of dimensions 39×39. A schematic view of a part of the contradiction matrix is given in Table 1. In the left column of the matrix there are characteristics that need to be improved, and in the first row of the matrix are the characteristics that deteriorate as a result of the characteristics being improved. In the contradiction matrix, at the intersection of the identified characteristics, there are certain numbers which correspond to the serial numbers of the inventive principles proposed for implementation in order to resolve the technical contradiction. Each of the listed principles proposes undertaking certain activities for a possible resolution of the problems described in the technical contradiction. The analysis of the offered activities leads to the conclusion about what needs to be done in order to solve the analyzed problem [1-4].
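As a simple illustration of how such a lookup works (a sketch in Python, not a tool from the paper; the hypothetical dictionary below holds only the cells visible in the Table 1 fragment):

# Illustrative sketch of a contradiction-matrix lookup (not from the paper).
# Keys are (improving parameter, deteriorating parameter) numbers; values are the
# serial numbers of the proposed inventive principles, as shown in the Table 1 fragment.
CONTRADICTION_MATRIX = {
    (11, 12): [35, 4, 15, 10],    # voltage (pressure) vs. shape
    (13, 14): [17, 9, 15],        # construction stability vs. firmness
    (39, 16): [20, 10, 16, 38],   # capacity (productivity) vs. parameter 16
}

def suggest_principles(improving, deteriorating):
    # Return the proposed inventive principles for the given technical contradiction.
    return CONTRADICTION_MATRIX.get((improving, deteriorating), [])

print(suggest_principles(13, 14))  # -> [17, 9, 15]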

2. INVENTIVE PRINCIPLES AND CONTRADICTION MATRIX
The inventive principles are the best-known TRIZ tool for resolving technical contradictions. They are in fact axioms, i.e. statements whose truth does not require proof, as it is generally accepted. The inventive principles are an essential tool for researching the actions that need to be taken towards or within a TS in order to resolve certain technical contradictions. The TRIZ methodology, in addition to the inventive principles, uses the contradiction

Table 1. Schematic representation of the contradiction matrix

Characteristics which are being improved (rows) versus deteriorating characteristics (columns); each cell lists the serial numbers of the inventive principles proposed for resolving that contradiction.

                                        12              13              14              15              16
11 Voltage (pressure)                   35, 4, 15, 10   35, 33, 2, 40   9, 18, 3, 40    19, 3, 27
12 Shape                                                33, 1, 18, 4    30, 14, 10, 40  14, 26, 9, 25
13 Object's construction stability      22, 1, 18, 4                    17, 9, 15       13, 27, 10, 35  39, 3, 35, 23
14 Firmness                             10, 30, 35, 40  13, 17, 35                      27, 3, 26
...
39 Capacity (productivity)              14, 10, 34, 40  35, 3, 22, 39   29, 28, 10, 18  35, 10, 2, 18   20, 10, 16, 38

Inventive principles (names shown alongside the matrix): 1 - Segmentation, 2 - Extraction, 3 - Local quality, 4 - Asymmetry, 5 - Consolidation, ..., Preventive counter-action.

Principle 4. Asymmetry

3. TRIZ PRINCIPLES' APPLICATION TO ORDNANCE

Just like in the previous example, the 9M14 "Maljutka" missile can serve as a specific example of the principle of asymmetry. Namely, to stabilize the flight of this type of antitank missile, folding wings located on the body of the rocket, i.e. within the flight control system of the rocket, are used. During the flight, the wings are unfolded in such a way that they take an inclined position relative to the longitudinal axis of the missile and thus stabilize the rocket's flight to the target.

Each of the 40 TRIZ inventive principles has its possible


application when it comes to ordnance.
Principle 1. Segmentation

Within storage capacities, all ordnance is deployed in stocks.


Stocks are formed of the same caliber, model tools, types of
grains (mine) and the series of gunpowder. Such an
arrangement provides better visibility, control, easier access
and means manipulation, as well as gathering of related
ordnance, ventilation ("breathing") of the agents, etc.

Principle 5. Merge

A crate for 7.62 mm ammunition with the standard M67 bullet for the AP 7.62 mm M70 contains, depending on the package, 1,120 or 1,260 identical rounds (same caliber, same gunpowder series and rates). This facilitates storing of the resources, as well as handling them (receiving and issuing of ammunition) [8].

Principle 2. Separation (Extraction)

By storing ordnance, it has been noticed that there is a


possibility of its activation if kept together with initial
agents. Applying the separation principle when storing
ordnance, means keeping aside (in a separate warehouse)
initial agents from other munitions, which reduces the risk
of deliberate ordnance initiation to a minimum. The initial
agents are used for creating and transmitting the initial
impulse, in order to activate other types of ordnance. This
principle also applies to all munitions that have suffered
any kind of accidents (a fall from a height greater than 1
m, falling off a vehicles, munitions that have suffered fire,
explosion, etc.), when it is forbidden to keep these agents
in the same building with other ordnance.

Principle 6. Universality

Typically, a 7.62 mm bullet can be used for AP 7.62 mm


M70 regardless of the gun model (with a wooden or
folding stock, etc.). The same bullet is used with PM 7.62
mm M72 [9,10].
Principle 7. Insertion (Matroishka, Babushka)

Applying this principle applies to self-propelled missile


systems. Thus, for example. 122/128 mm TF M91 rocket
was created by modifying the 122 mm rocket, so the
modified rocket launchers can be used for both calibers.
Such modified missile goes through three centering rings
of 128 mm caliber and then enters the self-propelled
multiple rocket launcher (SVLR 128 mm M77) and is
ready for the launch. Self-propelled multiple rocket
launcher 128 mm M77 is located on the FAP 2026
vehicle, in a space reserved for cargo [11].

Principle 3. Local quality

RBR 90 mm M79 Osa (Wasp) can be taken as a


specific example. This agent consists of a couple of parts,
each having its own function. If only one of these
functions in the chain fails, there will be no activation of
rockets at the finish. So the initiation is performed
electrically, from the triggering mechanism via wire on
the pitcher and rocket container to the fuse located within
the nozzle of the rocket engine. Missile guidance is
performed by the contact of the centered surfaces on the
liner of the warhead and missile connectors. Next, the
rockets' flight is stabilized by means of folding wings. To
activate the cumulative explosive charge at the finish, it is
necessary that the piezoelectric shock fuse does not fail at
the finish.

Principle 8. Counterweight

The illuminating mine 60 mm M67 serves to illuminate the battlefield. When the flame ignites the ejecting charge, it further ignites the illuminating mixture and the flare. Due to the combustion products, pressure builds up in the liner, which leads to the ejection of the flare and the parachute carrier.

Furthermore, the parachute slows the fall of the flare, so that the maximum illumination time of the battlefield is achieved.


Principle 14 Sphericity, Curvature

All modern hand grenades are generally cylindrical or


hemispherical (at the intersection of ellipses), allowing
easier handling and better grip in hand grenades. During
the bomb development, there were also parallelepiped
and prism-shaped ones (the famous prism-shaped "Vasic's
bomb"), but they were eventually thrown out of use
because of their impracticality.

Principle 9. Pre-strain

Very big problems concerning ordnance can arise years after the end of a war, due to the activation of unexploded munitions, which poses a threat because it leads to the suffering of innocent victims. To avoid this, the jobs of pyrotechnic search and destruction of unexploded munitions are filled by professionals, engineers and pyrotechnicians equipped with special protective clothing, equipment and facilities necessary for the safe implementation of the assigned tasks. In the event of encountering unexploded munitions and their eventual accidental activation, the risk of loss of life of the professionals at work is reduced.

Principle 15 Dynamism

During the design and construction of warehouses for the storage of munitions, the munition protection principles must be respected in order to keep the munitions safe from weather conditions, surface and ground water, theft, sabotage, pests, ammunition detonations in adjacent buildings and more. Inside the warehouse, the daily temperature fluctuation should not be greater than 5 °C and the relative humidity should not exceed 75%, etc.

Principle 10. Pre-change

One of the many problems with storing munitions refers


to their peacetime location, which, in the event of an
armed conflict, is the priority goal. To solve this problem,
first the pre-selection of the secret field locations for
storage is performed, and then, the dispersion is carried
out of all, or parts of the most important ordnance from
the peacetime warehouses. This procedure protects
ordnance from hostile attacks, because on the territory of
the former warehouses, only the empty warehouses
remain. An example of the application of this principle is
the action of a unit during the bombing of Serbia in 1999,
when, thanks to the resettlement of ordnance and other
technical systems, the significant amounts of technical
resources were saved.

Principle 16. Partial or extra required actions

The hand grenade - RB M75A can be taken as an example


of the application of this principle. Namely, in ideal
conditions, a bomb can be thrown up to 40 m maximum.
However, effective bomb action in diameter up to 25 m
was achieved by specific design of the bomb body,
whereby the core liner consists of more than 300 beads
diameter of about 1.5 mm, which are connected with
plastic mass. When triggering the explosive charge, the
beads are dispersed to a distance of 25 m, resulting in a
higher deadly effect.
Principle 17. Another dimension

Principle 11. Prevention

When using tools /weapons, all munitions that are fired or


launched, on their way to the target move on a parabolic
trajectory, rotating through the air and overcoming the
force of air resistance.

Warehouses for storing munitions must have their windows painted white or matte, the window shutters on the outside must be covered with a metal sheet, and there should be a metal grid on the windows with a spacing of at most 2 cm between the bars. The door must open outwards and it must be locked and double sealed. These measures prevent the potential misappropriation of the assets.

Principle 18. Mechanical vibration

Palletised munitions facilitated to a great extent the


manipulation inside the warehouse during restocking,
loading or unloading. By applying the means of integral
transport, it is not necessary to engage the workforce
(people) nor auxiliary cranes.

Application of this principle is evident in the case of the 64 mm M80 rocket, which is an integral part of the grenade launcher RBR 64 mm M80, also known as "Zolja", whose warhead has a piezogenerator (upper part) and a piezoelectric fuse (lower part) built in. Upon hitting the target, the piezogenerator creates an electrical impulse that is sufficient to activate the fuse, which then activates the cumulative explosive charge and causes the desired effect on the target.

Principle 13. Counter-effect

Principle 19. Periodic operation

When shooting with the RB M75, the bomb's effect at the finish exhibits itself after combustion of the retardation mixture in the lighter (3-4 s), after which the flame is transferred to the primary and secondary filling. This way a person who performs the shooting has enough time to reach safety, but also not the amount of time that would allow the safe escape of an enemy.

When firing from the AP 7.62 mm M70 at a distance of 100 m using 7.62 mm M67 ammunition with the basic bullet, the 3-5-9-5 principle applies - a change of the shooting mode (3 trial rounds, 5 rounds fired individually, 9 rounds in short bursts and 5 for single shooting under a protective mask).

Principle 12. Equipotential


using partially-combustible and combustible cartridge


cases are reflected in higher speed shooting, increased
system energy, reduced weight of ammunition during
transportation and handling, lower cost compared to metal
cases and the like. The drive-structural material of
partially-combustible and combustible cases must be
completely burnt during the combustion of a gunpowder
charge.

Principle 20. Do not interrupt the useful operation

When using weapons, especially the shooting ones,


continuity in shooting is often required (ie. "machine-gun
fire"). The continuous shooting time (unless it comes to a
halt due to technical problems) is ensured in certain types
of shooting arms (eg. 7.62 mm M84 machine gun), by
previously placing ammunition in a bandolier. Bandolier
is drawn into the ammunition warehouse through the
"feeding mechanism" and the continuity of action is
provided during firing until the moment of firing the very
last bullet from the belt (bandolier).

Principle 38. Application of strong oxidants

To increase efficiency of thermite mixture it is necessary


to ensure the existence of flame.
For this purpose, oxidants (barium and potassium nitrate)
and fuel (sulfur binder in a higher percentage, etc.) are
added to the mixture.

Principle 22. Converting damages into benefit

This principle is applied in constructional problem


solving which relates to borrowing of propellant gases
resulting from the combustion of gunpowder in a case.
Combustion of gunpowder and the expanding propellant
gases, which by expanding perform the usuful work of
suppressing missile in the gun pipe, giving it the
necessary initial speed. Through the "loans", part of these
powder gases is used to perform additional useful actions
on weapon systems which releases the service of certain
"physical actions" and provides automation of assets
work.

Principle 40. The application of composite materials

Originally, the cases were made of steel, which, due to the


high cost, imposed the need to use them repeatedly,
which, further created new problems in their collection
and re-use. In accordance with this principle the modern
cases are made of brass (an alloy of zinc and copper) and
are protected against corrosion by phosphating or coating.
Composite materials are being increasingly used in the
development of armored bodies withing combat systems
as well as in personal protection equipment of members
of the security forces.

Principle 23. Feedback loop

In a manner described in principle 22, the borrowed


gunpowder gases are used to execute a series of actions,
such as ejecting of empty cases, moving the feedback
mechanism backwards, adding of a new bullet in a gun
barrel, "locking tubes" and others.

4. CONCLUSION
During the life cycle of ordnance, a series of problems
arises, many of which are technical in nature. TRIZ
methodology is proposed in order to make daily work
with ordnance more efficient in solving the encountered
technical problems.

Principle 26. Copying

For the purpose of training the members of the security


forces, it is more economical to use ordnance for
secondary (auxiliary) purposes (military exercises and
school ordnance), which represent an "excellent copy" of
combat munitions, thereby ensuring maximum safe stuff
training.

The practical application of one of the inventive


principles during the bombing of Serbia in 1999 (principle
10 "Pre-change"), has shown that the existing working
system with ordnance functions well. By applying this
principle, a large number of ordnance warehouses have
been moved to the reserve positions in order to preserve
the system. This shows a high compatibility that exists
between the logic in applying the TRIZ principles and
successful solutions to problems with ordnance applied in
practice.

Principle 30 Use of flexible shells and thin films

This principle finds its application in the procedure for


packaging shooting arms ammunition. The packaging for
the shooting ammunition consists of several packages:
firstly, the ammunition is packed in a large amount of
cardboard boxes, then the cardboard boxes are neatly
stacked in a tin box, and in the end the tin is placed in a
wooden crate. This provides easier and safer handling of
ammunition, and at the same time it can "breathe" and be
protected from harmful external influences.

Regardless of the complexity of the system such as


ordnance, each of the 40 existing TRIZ inventive
principles has a potential application in solving new
problems. This paper vividly explains possible application
of each of those inventive principles.

ACKNOWLEDGMENTS

Principle 34. Rejection and regeneration of parts

The Serbian Ministry of Education and Science supported this work through grant no. TR34034 (2011-2016).

The development of high performance artillery is


significantly associated with the efficiency of
ammunition. Partially-combustible and combustible
cartridge case and its tactical and technical advantages
over conventional metal case contributes to greater
possibilities of combat weapons. Some of the benefits of

References
[1] D. Rajić, Z. Kamberović, B. akula: Kreativni inženjering, Inovacioni Centar TMF, Beograd, 2016.
[2] D. Rajić, B. akula, V. Jovanović: Uvod u TRIZ ili kako postati kreativan u tehnici, SIG, Beograd, 2006; www.triz-journal.com, Part of the Real Innovation Network, Accessed on: 2016-05-11.
[3] J. Fresner, J. Jantschgi, S. Birkel, J. Barnthaler, C. Krenn: The Theory of Inventive Problem Solving (TRIZ) as option generation tool within cleaner production projects, Journal of Cleaner Production 18 (2010) 128-136.
[4] O. Čabarkapa: Zaštita poverljivih inovacija, Redakcija Vojna knjiga, Beograd, 2010.
[5] F. I. Kubota, L. C. Rosa: Identification and conception of cleaner production opportunities with the Theory of Inventive Problem Solving, Journal of Cleaner Production 47 (2013) 199-210.
[6] D. Rajić, A. Samolov, M. Vitorović-Todorović, N. Pajić: Solving Ecological Problems in the Field of Defence Technologies, 4th OTEH, Defensive Technologies, Beograd, 2011.
[7] O. Čabarkapa, D. Petrović: Primena patentne dokumentacije pri projektovanju sredstava naoružanja i vojne opreme, Scientific Technical Review, Vojnotehnički institut, Beograd, 2014.
[8] Uputstvo za rad skladišta UbS, Generalštab VJ, TUV, 5105, Beograd, 2002.
[9] Tehničko uputstvo, Municija, deo 1., knjiga 1., TSl1./3., M715, SSNO, Beograd, 1974.
[10] Tehničko uputstvo, Municija, deo 2., knjiga 1., TSl1./1., SSNO, Beograd, 1974.
[11] Grupa autora: Osnovi konstrukcije artiljerijskog naoružanja, UA-223, VIZ, Beograd, 1983.


APPLYING OF NANOTECHNOLOGY IN PRODUCTION OF RIFLE AMMUNITION
MIHAILO ERČEVIĆ
Prvi Partizan AD, Užice, mihailo.ercevic@prvipartizan.com
VELJKO PETROVIĆ
Department for Defence Technologies SMR MO, Beograd, veljko.petrovic@mod.gov.rs
BRANKA LUKOVIĆ
Department for Defence Technologies SMR MO, Beograd, branka.lukovic@mod.gov.rs

Abstract: This work includes an analysis of the application of nanoparticles in the manufacturing of rifle ammunition. The results show the real application of nanoparticles of tungsten disulfide on the working tools for making cartridge cases and bullets, control tools, test barrels and subsystems for testing internal ballistics. The application of nanoparticles indicated the possibilities of increasing production resources, increasing the tactical and technical performance of ammunition and substantially reducing the funds required for production, from the bullets and control up to verification tests and final use.
Keywords: tungsten disulfide, nanoparticles, working tools, bullet, case.
manufacturing and control processes and components
involved in these processes as follows:

1. INTRODUCTION

- Control and working tools used to manufacture bullets and cases;
- Test barrels and barrels of weapons for testing interior, exterior and terminal ballistics;
- Bullets and cases (different calibers and types of ammunition).

With the development of technologies in the XXI century, people are talking more and more about the application of nanoparticles in different spheres of industry and life. Nanoparticles and nanotechnologies are becoming technologies that only the strongest countries with big science centers can use, but their use also takes place in countries that are ambitious to be active and more successful in the science-technological race. One of those countries is Serbia, which uses nanotechnology not only in the food, pharmaceutical and other branches of industry, but also in companies of the defense industry. Since this industry has a huge assortment of products, the analysis of the application of nanoparticles is comprehensive and will give exact results of its application in the production of artillery weapons, armored weapons and ammunition. The company Prvi partizan A.D. is active in the implementation of nanotechnology as the holder of rifle ammunition production.

The decision to carry out tests in the above-mentioned areas results from a comprehensive review of the manufacturing process, certain production problems and the need for a concrete application of nanotechnology in order to increase the manufacturing capabilities of the factory and to reduce the financial resources needed for production.

2. NANO PARTICLES OF TUNGSTEN DISULFIDE


The chemical designation of tungsten disulfide is WS2. It occurs naturally in the mineral tungstenite. The nano structure of tungsten disulfide was first discovered in 1992. Since the nanoparticles of tungsten disulfide have a fullerene-like structure, these particles have excellent morphological and mechanical characteristics, which places them among the best lubricating layers. Thanks to their nano dimensions of 2-200 nm, these particles penetrate the finest irregularities of the surfaces of metals and other materials, which provides a unique film and lubricant layer that ensures low friction, reducing wear and increasing the working life of the surfaces treated with these particles.

In this work we present what nanoparticles are, specifically tungsten disulfide nanoparticles and their mechanical and chemical characteristics. Several types of nanoparticles are presented, depending on the kind of application, with the main emphasis on the nanoparticles selected for use in the production of rifle ammunition. The main advantages and disadvantages of these are examined through the real results obtained from the testing of various components in Prvi partizan.
In order to examine the application of nanotechnology in Prvi partizan, we decided to carry out testing in several

Great antifriction characteristics;
Great anti-wear characteristics;
Application on almost all types of materials;
Easy maintenance;
Economical application;
Hydrophobicity;
Easy application of the nanoparticles onto the surface that we want to protect [1].

The structure of these particles is shown in the following picture:

Because of these characteristics, the nanoparticles have found application in various defense industry companies and in various products, from NBC protective suits, gas masks, ammunition, detonators and tank barrels, to composite materials for the production of aircraft and applications in rocket engines. Prvi partizan recognized the importance of the application of these particles and actively started to test them.

3. APPLICATION OF NANOPARTICLES IN
PRVI PARTIZAN A.D.
Considering the current assortment of products and the existing problems in production, and analyzing where the largest wear, the largest friction and the highest temperature occur in the manufacturing process and where the most can be saved on the expenditure of material, with the overall purpose of applying nanotechnology to reduce expenses, we decided to apply nanoparticles to three different components:
- Control and working tools;
- Test barrels for testing interior, exterior and terminal ballistics;
- Bullets and cases for different calibers of ammunition.
Picture 1. The structure of a nanoparticle of tungsten disulfide [1]

Control and working tools

For the acceptance of nanotechnology on control tools, we chose to test the control tools for machine control of case dimensions, because analyses showed that these control tools wear the most and the friction is the largest there, since the measuring is automatic and the case passes through a lot of gauges. Taking into account the representation of calibers in production, we chose the caliber 5.56 x 45 mm as a reliable indicator of the results. The main aim of the application of nanoparticles on these control tools is the possibility to extend their service life, which is currently around 240 working hours, to a minimum of 300 hours, after which the defined dimensions are lost and the tool is taken out of service. We chose the following control tools for machine control of case dimensions:

Among the most important characteristics of these particles, we should note that the range of their diameter is 2-100 nm, depending on the type of application. Resistance to high temperature is exceptional and exceeds 1250 °C. The chemical resistance of the nanoparticles goes up to pH 13. Thanks to their fullerene-like structure, the nanoparticles possess outstanding mechanical properties and can withstand high levels of load and pressure. The values this resistance can reach are still the subject of tests and we cannot be sure about the exact value, but it is comparable to the mechanical parameters of the material coated with the nanoparticles. By comparing layers of tungsten disulfide nanoparticles and other lubricants, we can say that the main advantages of tungsten disulfide are the following characteristics:

A higher degree of reliability;
More efficiency;
Longer service life of the parts to which these nanoparticles are applied;
Wide temperature range of application;
Great anticorrosion characteristics;

Machine control of the diameter of the case rim;
Machine control of the diameter of the primer pocket;
Machine control of the diameter of the case mouth.

Beyond the above-mentioned gauges, we also analyzed the gauges measuring the depth and length of the primer pocket and the depth of the powder chamber, but their wear percentage is lower, so they were not subject to further testing.

In the following pictures, the measuring spots on the case and the corresponding gauges can be seen:

Picture 3. Puncher with the final drawing of the 5.56 x 45 mm case
After the first pass through the working tool die, the PVD coating and the nano layer were stripped off, and the exact reason for this was not determined. The assumption is that tungsten disulfide is not compatible with titanium nitride, but tests to prove that are yet to be done. The same happened with the die for forging the steel core, although it was not coated with a VAT coating; there, the large load was probably one of the reasons. Since the initial tests with nanoparticles applied to the working tools failed, we cannot be sure of the results, but we can say that there is some incompatibility between titanium nitride and tungsten disulfide in the case of the punch, while in the case of the die the reason for the failure during forging was not determined.
Test barrels and barrels of weapons for testing
For the purposes of the examination of test barrels, we chose a test barrel for velocity, caliber 5.56 x 45 mm. The optimum lifetime of this barrel should be about 2000 shots on average in order to be profitable; after that it starts to lose the original dimensions of the barrel caliber and the barrel is taken out of use. The average working life in exploitation is about 2500-3000 shots. If we take the average price of these barrels, which is around $600, it is obvious that any improvement of the barrels and a longer working life affects the financial parameters.
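As a rough illustration of that financial effect (simple division, not figures from the report): at about $600 per barrel, a life of 2000 shots corresponds to roughly $0.30 per shot, while 2500-3000 shots corresponds to roughly $0.20-0.24 per shot.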

Picture 2. Review of the measuring spots on the case and the gauges for them
The control tools mentioned are made with the highest quality of workmanship and tight tolerances, and for that reason the finest layer of tungsten disulfide nanoparticles, 2-5 nm, was applied, in order not to endanger the nominal dimensions of the tools and move them into impermissible tolerances. After the treatment, the control tools were put into the regular control process and their work was monitored. After the first occurrence of a deviation of the tool dimensions, they were taken out of service and the results were summarized. There was an extraordinary extension of the working life of the tools, to 45 days or 360 working hours; in percentage terms, the tool life was extended by 50%, which largely means savings in the consumption of control tools, as well as financial savings.

For better application of the layer of nanoparticles, the barrel was previously degreased and cleaned and then put into heat treatment, heated for one hour at 50 °C. After that, the layer of nanoparticles was applied and the barrel was put into the test process, together with regular untreated barrels. We measured the mean velocity at 25 meters from the muzzle of the barrel and the accuracy of the bullet at 100 meters from the muzzle. The accuracy criterion was the greatest Ec, which represents the distance between the centers of the two mutually most distant hits.

For testing tungsten disulfide nanoparticles as a protective layer on the working tools, we started with the same parameters as for the control tools: we decided to test the working tools for the third drawing of the 5.56 x 45 mm case, as well as the dies for forging the steel core of the SS109 5.56 mm bullet. The working tools and dies were treated with a thin layer of 2 nm in order to stay within the tolerance values. It is important to note that the working tools are made of steel and coated with a PVD coating of titanium nitride, which is used as protection against corrosion and wear. After treatment with the nanoparticles, the treated components were put into regular work. The following picture presents one of the punchers:

To summarize the results, we took the values of velocity, accuracy and pressure for the ballistics, and the wear of the barrel. The velocity values were lower by an average of 20 m/s and the pressure values were lower by up to 200 bar, while the accuracy was within the proper military MIL standard for the caliber 5.56 mm. From the results we can conclude that a test barrel treated with the nano coating gives reduced pressure and velocity, which may increase the working life of the barrel, but is negative in terms of achieving the tactical-technical requirements for the bullet, i.e. achieving the set velocity; another negative consequence directly linked with the reduction of velocity is that the powder mass has to be increased in order to achieve the given velocity.


This has a negative impact on production costs. After 1000 shots, the caliber and the wear of the barrel were checked and it was found that the wear was slightly less than that of the untreated barrel, so it can be concluded that the nanoparticles reduce wear in the barrel by up to 5-10% on average. When we take into account that positive characteristic and the negative one, the reduction of the velocity of the bullet, a more profound analysis of the application of nanoparticles to the inside of test barrels follows. In terms of the cost of procurement and development of the barrels there is progress, and on the other hand a setback in terms of increasing the costs of ammunition laboration.
To test the application of nanoparticles on test weapons, it was decided to test the possibility of reducing the leading (lead fouling) of gun and revolver barrels. For the purpose of this testing, LRN lead bullets of 10.2 grams, caliber .38 (.3575), were tested. The bullets were treated with an ultra thin layer of nanoparticles and the coating did not affect the caliber of the bullet. For the purposes of the experiment two guns were used:

Figure 5. The illustration of the remaining lead shavings


after the passage of the regular and treated bullet through
the barrel after the first shooting

Taurus special 38, the length of the barrel 4 inches;


Ruger 38 special, the length of the barrel 2 inches.
The reason for taking different barrel lengths is to directly see whether the barrel length affects the amount of lead remaining in the barrels even though the bullets were treated with a nano layer. The aforementioned revolvers were pre-degreased, cleaned and prepared for the experiment. Five and six bullets were shot in three groups from both revolvers, and between each shooting the revolver barrels were cleaned. Also, the bullet chambers in both revolvers were cleaned. The following figure illustrates both leaded and non-leaded barrels:

The quantity and dimensions of shavings after the second


shooting:

Figure 4. The illustration of leadened and non-leadened


barrels

Figure 6. The illustration of the remaining lead shavings


after the passage of the regular and treated bullet through
the barrel after the second shooting

After the first shooting, shavings from the barrel appeared with both the treated and the regular bullets, except that the leading of the barrel is highly reduced with the treated bullet, which can be seen from the attached figure:

The quantity and dimensions of shavings after the third


shooting:


Table 1. Pulling force of a regular bullet and a bullet with a nano layer

Article          Pulling force [N]
Regular bullet   420-620
Treated bullet   420-630
The results of bullet mean velocity and maximum
deviation of bullet velocity measured at 25 meters from
the muzzle:
Table 2. The values of bullet mean velocity and its max deviation

                                   Standard bullet   Treated bullet   Regular bullet
Velocity [m/s]                     369               363              371
Mean deviation of velocity [m/s]   6.46              4.42             7.71
The accuracy results were measured at 100 meters from the muzzle, from two different barrels. Three variants were taken as accuracy criteria:
- Rs (mean radial deviation of the hits);
- s (the distance between the centers of the two most distant hits);
- H+L (the sum of the greatest width and height of the hits).

Figure 7. The illustration of the remaining lead shavings after the third shooting
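For illustration only (this computation is not part of the paper), the three criteria can be evaluated from the hit coordinates on the target as in the following Python sketch; the coordinates are made-up placeholder values.

import math

# Placeholder hit coordinates on the target, in cm (illustrative values only).
hits = [(0.0, 1.2), (-0.8, 0.4), (1.5, -0.6), (0.3, 2.0), (-1.1, -0.9)]

# Mean point of impact.
mx = sum(x for x, _ in hits) / len(hits)
my = sum(y for _, y in hits) / len(hits)

# Rs: mean radial deviation of the hits from the mean point of impact.
Rs = sum(math.hypot(x - mx, y - my) for x, y in hits) / len(hits)

# s: distance between the centers of the two mutually most distant hits.
s = max(math.hypot(x1 - x2, y1 - y2) for x1, y1 in hits for x2, y2 in hits)

# H+L: sum of the greatest width and the greatest height of the hit pattern.
H_plus_L = (max(x for x, _ in hits) - min(x for x, _ in hits)) \
         + (max(y for _, y in hits) - min(y for _, y in hits))

print(round(Rs, 2), round(s, 2), round(H_plus_L, 2))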

After the first shooting, leading of the barrel is evident with both the treated and the untreated bullets, except that there are large differences in the size and quantity of the particles remaining in the revolver barrel. When we compare these lead particles, it can be concluded that the bullets treated with tungsten disulfide leave far fewer particles after passing through the barrel, and the shavings that remain are smaller. After the second shooting this difference is more obvious, while after the third shooting it can be seen that the level of remaining particles in the barrel is significantly lower than in the first two shootings. Visual inspection of the barrel after firing the regular bullets found lead deposits on the grooves of the barrel, while inspection of the revolver barrel after firing the bullets treated with the nano layer did not show this phenomenon. No differences were observed in the amount of remaining lead particles due to the length of the revolver barrel; the same amount of deposits was observed with both the 4-inch and the 2-inch barrel.

Table 3. The results of accuracy of FMJ bullet in caliber 9 x 19 mm

            Standard       Treated bullet   Regular bullet
BARREL      1.     2.      1.     2.        1.     2.
Rs [cm]     2.85   2.20    1.99   1.72      2.39   1.27
s [cm]      10.5   7       6.7    5.4       8      5.4
H+L [cm]    17.2   10.1    11.7   8.9       11     7.3
To test the pressure values, it was chosen to examine them using a crusher test barrel and a piezo test barrel. The values of the mean maximum pressure were measured. In addition to the pressure values, the bullet velocity measured at 25 meters from the muzzle was also measured.
Table 4. The results of the measurement of mean maximum pressure and velocity on a crusher barrel

                      Standard   Treated bullet   Regular bullet
Mean velocity [m/s]   360.57     351.34           354.83
Mean pressure [bar]   2337.2     2576             2561

Bullets and cases


For the purposes of nano layer application on the bullets,
the treatment of bullets in caliber 9 x 19 mm was selected,
in order to increase the performance: better accuracy and
pressure reduction. Bullets were, prior to the application
of a nano layer, degreased and cleaned and a thin layer of
tungsten disulfide was applied in order to comply with the
specified tolerance of bullet caliber. For the evaluation of
the test results, the values of standard ammunition were
taken - the force of pulling the bullet from the case, bullet
mean velocity measured at 12.5 meters from the muzzle
and the mean maximum pressure, as well as the accuracy
of the bullet, measured at 50 meters from the muzzle.

Table 5. The results of measurement of mean maximum pressure and velocity on a piezo barrel

                      Standard   Treated bullet   Regular bullet
Mean velocity [m/s]   367        371.52           373.51
Mean pressure [bar]   2046       2219             2175
If we summarize the results, we can see that when it comes to the pulling force there are no changes. A smaller pulling force, i.e. a greater effect of the nanoparticles compared to the regular bullets, was

The results of pulling force are presented in the following


table:


expected, but it did not happen. The positive thing is that we did not get a greater pulling force than envisaged, so we can say that the nanoparticles maintain the same level of pulling force as the bullets of regular production.

The application of nanoparticles on the control tools gave good results: an increase in the service life of the control tools of 30% on average was achieved, and the commercialization of the nanoparticle application can be started there.

When we look at the velocity results, the attached data show that the mean velocity of the treated bullets is slightly the lowest, but we obtained an improvement in the bullet velocity deviation, which is very important for the accuracy of the bullets, because a more uniform movement and flight of the bullets gives a more accurate arrangement of the hits. As for the velocity of the bullets, observed on the crusher and piezo barrels, we again obtain velocity values that are slightly lower than the regular values, but this does not endanger the requirements for the bullets.

The application of nanoparticles on the working tools has not produced good results yet, but it should be noted that the tools used for testing were intended exclusively for plastic drawing of material and were already protected with a PVD coating. For further testing it is planned to use tools that will not be protected by any means other than a protective nano layer of tungsten disulphide.
The application of nanoparticles against the leading of gun and revolver barrels has given excellent results. It is obvious that the nano layer significantly reduces the lead remaining in the barrel, in terms of both the size of the particles that remain and their quantity. Given that the amount of lead that remains in the weapon directly affects the life of the weapon barrel, we can say that by using lead bullets treated with the nano coating we significantly increase the service life of weapons, thus significantly affecting the financial aspect of their procurement, and also benefiting the end customer, who gets solid ammunition that will directly extend the service life of the weapon.

Of the performed tests, the most was expected from the pressure tests on the crusher and piezo barrels. Somewhat unexpectedly, we obtained the highest pressures precisely with the treated bullets, although it was assumed that the layer of nanoparticles would reduce wear, temperature and eventually pressure; this did not happen. The same result was obtained with the tests on the piezo barrel. It should be noted that this type of testing was done only on gun ammunition in caliber 9 x 19 mm, and it is planned, in cooperation with the Military Technical Institute in Belgrade, to do tests with sniper ammunition of caliber .338 Lapua Magnum, with FMJ BT, HP BT and monolithic bullets.

The application of nanoparticles on bullets from regular production, in this case gun bullets of caliber 9 x 19 mm, has not given results as regards the pressures obtained on the crusher and piezo barrels. Moreover, we got an increase in the pressure values of up to 10% at most. In order to verify these results, new tests will be conducted, but on sniper ammunition of caliber .338 Lapua Magnum, in cooperation with the Military Technical Institute.

At the end of the summary of results, the measure of accuracy remains. We used two barrels and took three different accuracy criteria as a reference. When we compare all the results, i.e. the mean values, it can be seen that the treated bullets provide by far the best values for all three criteria of accuracy.

4. CONCLUSION
The conducted tests have given some indications about the application of nanoparticles on ammunition. So far, this study was only an introduction to a new one, which will be more detailed and implemented in cooperation with the Military Technical Institute in Belgrade.

LITERATURE
[1] Jaksic, G.: Nanotechnologies, Speed up International, Belgrade, 2016.
[2] Erčević, M., Spasenić, S., enadi, Z.: Technical report 121/310/2016, Prvi Partizan AD, Užice, 2016.


DETERMINATION OF COMPATIBILITY OF DOUBLE BASE PROPELLANT WITH POLYMER MATERIALS USING DIFFERENT TEST METHODS
MIRJANA DIMIĆ
Military Technical Institute, Belgrade, mirjanadimicjevtic@gmail.com
BOJANA FIDANOVSKI
Military Technical Institute, Belgrade, b.fidanovski@gmail.com
LJILJANA JELISAVAC
Military Technical Institute, Belgrade, jelisavach@yahoo.com
SLAVIŠA STOJILJKOVIĆ
Technical Overhaul Works, Kragujevac, trzk@trzk.co.rs
NATAŠA KARIŠIK
Military Technical Institute, Belgrade, natasa.karisik@gmail.com

Abstract: Compatibility of double base propellant NGB-051 with two types of polymer materials (Nylon 12 and
Polymethylmethacrylate) was determined. Testing was performed using heat flow calorimetry, differential scanning
calorimetry, the method of chemical analysis after aging and vacuum stability test method. The heat flow curves of
propellant, polymeric materials and their mixtures, and the theoretical curves were determined. Produced energy is
calculated and the values of relative and absolute compatibility were determined. Analysis of the exothermic peak of
decomposition of propellant and its mixture with polymer materials was performed and the maximum difference in peak
temperatures was calculated. The stabilizer content of the unheated propellant, the artificially aged propellant and the
propellant after heating in contact with the polymer material was determined. The value of the volume of released gas
to the propellant and polymer materials as well as mixtures thereof was determined. The value of absolute compatibility
was calculated. Compatibility was assessed on the basis of the results presented.
Key words: compatibility, heat flow calorimetry, differential scanning calorimetry, the method of chemical analysis,
vacuum stability test method, double base propellant, polymer materials.

1. INTRODUCTION

The most important effects/phenomena as produced by


chemical incompatibility reactions between explosive and
contact material are depicted in Figure 1.

Explosives, propellants and pyrotechnic mixtures


(explosives or energetic materials) usually come into
contact with a large number of materials such as plastics,
adhesives, waxes and metal, either by direct contact or
through the environment within the ammunition. For this
reason the compatibility of energetic materials with other
components used in ammunition is extremely important, having in mind the high demands made on their safety and
functioning. The ideal case of compatibility would be that
the materials do not react with each other even after long
storage periods at various conditions. For practical
reasons, materials are judged compatible if during and
after a specified storage period the functioning and safety
of the components are still acceptable [1].

Chemical reactions between explosive and contact


material can, on the explosive, increase the rate of binder
degradation, stabilizer depletion, heat and gas production
and weight loss. Furthermore, the sensitivity of the
explosive can be increased. On the other hand, also
chemical reactions of the contact material can be initiated,
such as post curing and decomposition of binders and
corrosion processes in container materials.
From the analysis of numerous compatibility tests, it was
concluded that contact materials which are found to be
incompatible with one class of organic explosives very
often are also incompatible with the other classes [2].

In contrast to the chemical ageing reactions,


incompatibility of energetic materials is much less
investigated.

Propellants are thermodynamically unstable high-energy


materials, whose functional and safety features change


during aging. Most of them are subject to slight chemical decomposition already at room temperature; this process entails a number of mechanisms and chemical decomposition reactions, many of which are self-accelerated. Aging can be accelerated by incompatibility reactions between the propellants and contact materials, or even between the very constituents.


and mass measurement results are obtained in a short period of time [7-9]. Moreover, these methods are suitable for
testing explosives and pyrotechnic compositions. On the other hand, the small sample mass may be a drawback, due to
sample inhomogeneity. A compatibility study by thermal methods, e.g. the DSC method, is based on monitoring changes
in the melting temperature (or, in the case of two polymer materials, the glass transition temperature), as well as kinetic
parameters such as the activation energy [10].
For the evaluation of test results, different approaches are in use, such as absolute incompatibility (Qr = M - Mcalc) and
relative incompatibility (D = M/Mcalc). Quantity M is the specific property (e.g. heat release, evolved gas volume,
stabilizer depletion) as measured for the mixture, and Mcalc the same property calculated for the mixture by linearly
combining the measured values of the isolated explosive and contact material. Quantity M itself can be used as a third
criterion (stability of the mixture), considering that, if two materials are regarded as compatible, their mixture has to be
chemically stable as well and therefore must fulfill the stability test requirements of the respective explosive.

Figure 1. Phenomenological description of incompatibility
The most reliable way to investigate compatibility is to use
a variety of techniques to investigate chemical and physical
reactions and to perform ageing experiments as close to
storage conditions as possible. In most cases this is very
time-consuming. In practice, one expects reliable results
from a compatibility investigation in a short time. To do
this, some tests based on accelerated ageing at higher
temperatures are available, measuring gas evolution
(vacuum stability test), heat effects (heat flow calorimeter,
differential scanning calorimeter (DSC)), weight loss
(thermogravimetry (TGA)) or stabilizer loss [1].

3. EXPERIMENTAL PART
Material
For the experimental testing, a sample of propellant and two samples of polymer materials were selected:
- Double base propellant NGB-051, MBL series 0916;
- Nylon 12 (polyamide 12) and
- Polymethylmethacrylate (PMMA).

The most important standard, which describes the testing


and assessment of chemical compatibility, is STANAG
4147 [3]. According to this standard, the purpose of a
compatibility test is: "to provide evidence that a material
may be used in an item of ammunition without detriment
to the safety or reliability of an explosive with which it is
in contact or proximity".

Microcalorimetry method
The tests were performed on a TAM III heat flow calorimeter (TA Instruments). Samples of the propellant, the polymer
materials and their mixtures were heated for 168 hours at 85°C (STANAG 4147, Test 2). The heat released over time by
a mixture of the double base propellant and a polymeric material is compared with the reference value, which represents
the sum of the heat released when these materials are heated separately.
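As a minimal sketch of how such a reference curve can be formed, the snippet below sums the mass-weighted heat flows of the separately measured components; the placeholder arrays and the assumed 1:1 mass ratio are illustrative and not taken from the standard.

```python
import numpy as np

# Hypothetical heat-flow records of the pure components (per gram of each
# component), sampled at the same time points during the 168 h / 85 degC test.
time_h = np.linspace(0.0, 168.0, 337)                 # hours
hf_propellant = 20.0 + 5.0 * np.exp(time_h / 80.0)    # placeholder data
hf_polymer = np.full_like(time_h, 2.0)                # placeholder data

# Reference ("theoretical") curve for a 1:1 mixture by mass: each gram of the
# mixture contains 0.5 g of propellant and 0.5 g of polymer, so the expected
# heat flow per gram of mixture is the mass-weighted sum of the pure curves.
w_prop, w_poly = 0.5, 0.5
hf_theoretical = w_prop * hf_propellant + w_poly * hf_polymer

# The measured heat flow of the real mixture would then be compared against
# hf_theoretical; a large excess indicates an interaction between the materials.
```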

This STANAG describes the following tests and procedures:
- Vacuum stability (Test 1)
- Heat flow calorimetry (Test 2)
- Thermogravimetric analysis (Test 3)
- Differential scanning calorimetry (Test 4)
- Chemical analysis after ageing (Test 5)

The method of differential scanning calorimetry
Thermal characterization was carried out in the temperature range from 100°C to 300°C at a heating rate of 2°C/min in a
DSC Q 20 instrument (TA Instruments), according to STANAG 4147, Test 4. Comparing the decomposition peak
maximum of the propellant with the decomposition peak maximum of the mixture of propellant and polymeric material
allows the judgement of compatibility.


For a long time the vacuum stability test was the most widely applied. The compatibility criteria of this test remain valid
until other criteria, or other methods for compatibility testing, are applied.
All spontaneous chemical and physical processes are
associated with heat effects. Monitoring the flow of heat
was used to study the compatibility of explosive and
polymer materials by using heat flow calorimeter [1, 4-6].
This method is widely accepted in laboratories engaged in
the study of polymer compatibility and explosive
materials [1].

Methods of chemical analysis after aging
The method of chemical analysis was conducted in accordance with STANAG 4147 (Test 5), determining the stabilizer
content after aging samples of the double base propellant and of its mixtures with the polymeric materials for 14 days at
80°C. Aging was conducted in a Julius Peters thermal block, and the stabilizer content was determined on a Waters 1525
liquid chromatograph.

Thermal analytical techniques such as DSC, TGA and


differential thermal analysis (DTA), use small samples

Method of vacuum stability test
Tests were carried out on the vacuum stability test device Stabil 20 (OZM Research), in accordance with STANAG 4147
(Test 1, Procedure B), heating the samples of the double base propellant, the polymeric materials and their mixtures for
96 hours at 90°C. The method is based on the principle of monitoring, i.e. determining the volume of gases released from
the above samples in a closed system under vacuum.

The relative compatibility D is calculated as:

D = 2M / (E + S)    (1)

where:
M - heat generation of the mixture, J/g;
E - heat generation of the propellant, J/g;
S - heat generation of the polymer, J/g.

When:
D < 2 - the mixture is considered to be compatible;
D > 3 - the mixture is considered to be incompatible;
2 < D < 3 - the mixture is considered "moderately" incompatible (it is necessary to apply other methods of determining
the compatibility).
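A minimal sketch of this evaluation, assuming the released energies have already been integrated from the heat flow curves; the function names and the printout are illustrative, and the numbers are those of Table 1.

```python
def relative_compatibility(M, E, S):
    """Relative compatibility D = 2M / (E + S), eq. (1).

    M -- energy released by the mixture, J/g
    E -- energy released by the propellant alone, J/g
    S -- energy released by the polymer alone, J/g
    """
    return 2.0 * M / (E + S)

def classify_D(D):
    # Criteria quoted in the text: D < 2 compatible, D > 3 incompatible,
    # values in between require additional test methods.
    if D < 2.0:
        return "compatible"
    if D > 3.0:
        return "incompatible"
    return "moderately incompatible - apply other methods"

# Values from Table 1:
for name, M, E, S in [("NGB-051/Nylon 12", 91.16, 23.47, 2.17),
                      ("NGB-051/PMMA", 8.88, 23.47, 0.77)]:
    D = relative_compatibility(M, E, S)
    print(f"{name}: D = {D:.2f} -> {classify_D(D)}")
```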

4. THE TEST RESULTS AND ANALYSIS OF RESULTS

Microcalorimetry method
Analysis of the results of the individual measurements was performed for the heat release, i.e. the heat flow, of the
double base propellant, the polymeric materials and their mixtures, together with the calculation of the theoretical curves
(Figures 2 and 3).

Table 1 shows the calculated values of the energy released for the propellant NGB-051 and its mixtures with the polymer
materials, and the relative compatibility D calculated from them.

Table 1. Values of energy released and the relative compatibility
Sample              Energy released, J/g    Relative compatibility D
NGB-051/Nylon 12    91.16                   7.11
NGB-051             23.47                   -
Nylon 12            2.17                    -
NGB-051/PMMA        8.88                    0.73
NGB-051             23.47                   -
PMMA                0.77                    -

Figure 2. Heat flow curves of components and mixtures NGB-051/Nylon 12

Figure 3. Heat flow curves of components and mixtures NGB-051/PMMA

By comparing the theoretical and experimental heat flow curves, good agreement is observed for the mixture
NGB-051/PMMA, while a significant disagreement is evident for the mixture NGB-051/Nylon 12.

Based on the results of the measurements, the energy released per unit mass for the energetic material, the polymeric
materials and their mixtures was determined. From these results the relative compatibility D was calculated, as given by
equation (1).

These results can be accepted, because the microcalorimetry method is the most reliable method for testing the
compatibility of powders and rocket propellants with test materials.

The samples of the double base propellant are stable according to the criteria of STANAG 4582 when tested by the
microcalorimetry method: the heat flow of the samples does not exceed 201 µW/g during 168 hours at the temperature of
85°C. According to this result, instability of the propellant itself cannot be the reason for the observed behaviour; the
observed effects are caused by the interaction of the energetic and polymer materials.

Applying the criterion of relative compatibility, it can be concluded that the propellant NGB-051 is incompatible with
Nylon 12 (D > 3) and compatible with PMMA (D < 2).

The method of differential scanning calorimetry

Figures 4-6 present the DSC thermograms of the samples of the double base propellant, the polymers and their mixtures.

On the DSC thermograms of the polymer materials only an endothermic peak is present, i.e. the melting peak. There is
no exothermic peak, i.e. no peak representing the decomposition process.

On the DSC thermograms of the double base propellant and of the mixtures of propellant and polymer material only the
exothermic decomposition peak is present.

The temperature of the peak maximum was determined and the maximum difference in peak temperatures ΔT was
calculated using Thermal Stability Kinetics Analysis for the double base propellant and for the mixtures of the double
base propellant and the polymer materials. The presence of the test material provokes degradation of the energetic
material, which causes a decrease of the maximum decomposition peak temperature.

When:
- ΔT < 4°C - the mixture is considered to be compatible;
- 4°C < ΔT < 20°C - the mixture is considered "moderately" incompatible (it is necessary to apply other methods of
determining the compatibility);
- ΔT > 20°C - the mixture is considered to be incompatible.

The values of the peak maximum (Tmax), the maximum difference in peak temperatures (ΔT) and the compatibility
assessment are shown in Table 2.

Table 2. Results of the DSC method
Sample              Point of exothermic peak, Tmax, °C    ΔT, °C    Compatibility assessment
NGB-051             190.95                                -         -
NGB-051/Nylon 12    190.23                                0.72      Compatible
NGB-051/PMMA        191.8                                 -0.85     Compatible

Figure 4. The DSC thermograms of Nylon 12

Figure 5. The DSC thermograms of double base propellant NGB-051

Figure 6. The DSC thermograms of mixture NGB-051/Nylon 12

In Table 2 it can be seen that one of the ΔT values is negative. The explanation of this result is that the maximum
decomposition peak of the mixture does not decrease, as would be expected, but increases. According to the criteria and
the results, the samples of the mixtures of the double base propellant and the polymer materials are considered
compatible.

Methods of chemical analysis after aging
The results for the stabilizer content of the aged and non-aged samples of the double base propellant and of the mixtures
of the propellant and the polymer materials are shown in Table 3.

Table 3. Results of the method of chemical analysis
Sample                     Content of stabilizer, mass %    Compatibility factor K
NGB-051, non-aged (A)      1.58                             -
NGB-051, aged (C)          0.87                             -
NGB-051/Nylon 12 (B)       0.62                             1.35
NGB-051, non-aged (A)      1.58                             -
NGB-051, aged (C)          0.87                             -
NGB-051/PMMA (B)           1.13                             0.63

For calculating the compatibility factor (K) according to this method, equation (2) is used:

K = (A - B) / (A - C)    (2)

where:
A - content of stabilizer in the non-aged sample of the double base propellant, mass %;
B - content of stabilizer in the sample of propellant which was in contact with the polymer material during heating,
mass %;
C - content of stabilizer in the aged sample of the double base propellant, mass %.

When K ≤ 1.5 the energetic material is considered compatible with the contact material.
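A short sketch of this evaluation; the (A - B)/(A - C) form of equation (2) is reconstructed here from the Table 3 values, and the function name is illustrative.

```python
def compatibility_factor(A, B, C):
    """Compatibility factor K = (A - B) / (A - C), eq. (2): the stabilizer
    depletion of the propellant aged in contact with the polymer, relative to
    the depletion of the propellant aged alone.

    A -- stabilizer content of the non-aged propellant, mass %
    B -- stabilizer content of the propellant aged in contact with the polymer, mass %
    C -- stabilizer content of the propellant aged alone, mass %
    """
    return (A - B) / (A - C)

# Values from Table 3 (A = 1.58 %, C = 0.87 %):
for polymer, B in [("Nylon 12", 0.62), ("PMMA", 1.13)]:
    K = compatibility_factor(1.58, B, 0.87)
    verdict = "compatible" if K <= 1.5 else "not compatible"
    print(f"NGB-051/{polymer}: K = {K:.2f} -> {verdict}")
```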

It can be concluded from these results that the double base propellant NGB-051 is compatible with both polymer
materials according to the method of chemical analysis after aging. The stabilizer content determined for the mixture of
the propellant and PMMA is somewhat higher than in the aged propellant itself (Table 3). The explanation of this result
is that the PMMA had melted, so the extraction and the determination of the stabilizer content were more difficult and
the determined stabilizer content is therefore increased.

Method of vacuum stability test
The evolved gas volumes, at standard pressure and temperature, are calculated according to equation (3):

V = (Vc + Vt - Σ mi/ρi) · [p2·273/(273 + t2) - p1·273/(273 + t1)] · 1/1.013    (3)

where:
V - the evolved gas volume, cm3;
Vc - the volume of the transducer, cm3;
Vt - the volume of the glass test tubes, cm3;
mi - the masses of all examined samples, g;
ρi - the densities of all examined samples, g/cm3;
p1 - pressure at the beginning of the experiment, bar;
p2 - pressure at the end of the experiment, bar;
t1 - temperature at the beginning of the experiment, °C;
t2 - temperature at the end of the experiment, °C.

Knowing the evolved gas volumes of all analyzed samples from equation (3), the compatibility criterion is calculated
according to equation (4):

VR = M - (E + S)    (4)

where:
VR - the evolved gas volume produced by the reaction of the compounds in the mixture;
M - the evolved gas volume of the mixture of the energetic and polymer materials, mixed in a mass ratio (2.5 + 2.5) g;
E - the evolved gas volume of the energetic material, sample mass 2.5 g;
S - the evolved gas volume of the examined material (polymer material), sample mass 2.5 g.

The criteria are:
VR < 5 cm3 - the mixture is considered to be compatible;
VR > 5 cm3 - the mixture is considered to be incompatible;
VR = 5 cm3 - it is necessary to apply other methods of determining the compatibility.
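A minimal sketch of equations (3) and (4); the function names are illustrative and the numeric check uses the Table 4 values.

```python
def evolved_gas_volume(Vc, Vt, masses_g, densities_g_cm3, p1, p2, t1, t2):
    """Evolved gas volume at standard conditions, eq. (3).

    Vc, Vt -- transducer and test-tube volumes, cm3
    masses_g, densities_g_cm3 -- sample masses and densities
    p1, p2 -- pressure at the start / end of the test, bar
    t1, t2 -- temperature at the start / end of the test, degC
    """
    free_volume = Vc + Vt - sum(m / rho for m, rho in zip(masses_g, densities_g_cm3))
    return free_volume * (p2 * 273.0 / (273.0 + t2)
                          - p1 * 273.0 / (273.0 + t1)) / 1.013

def vr_criterion(V_mixture, V_energetic, V_polymer):
    """VR = M - (E + S), eq. (4); VR < 5 cm3 is taken as compatible."""
    return V_mixture - (V_energetic + V_polymer)

# Values from Table 4:
print(vr_criterion(15.977, 2.864, 0.501))   # NGB-051/Nylon 12 -> 12.61 (incompatible)
print(vr_criterion(1.776, 2.864, 0.617))    # NGB-051/PMMA     -> -1.71 (compatible)
```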

Using the same method, it was confirmed that the double base propellant NGB-051 is chemically stable: for a sample
mass of 2.5 g the evolved gas volume is 2.8 cm3, i.e. 1.12 cm3/g. According to the chemical stability criterion, the
evolved gas volume should be less than 1.2 cm3/g for a sample to be considered chemically stable.

The evolved gas volumes of all samples and their mixtures, together with the compatibility criterion VR, are shown in
Table 4.

Table 4. Results of the method of vacuum stability test
Sample              Evolved gas volume, cm3    VR, cm3
NGB-051/Nylon 12    15.977                     12.612
NGB-051             2.864                      -
Nylon 12            0.501                      -
NGB-051/PMMA        1.776                      -1.705
NGB-051             2.864                      -
PMMA                0.617                      -

The results obtained using the method of vacuum stability test show that the double base propellant NGB-051 is
incompatible with Nylon 12 and compatible with PMMA.

CONCLUSION
Four different methods were used for determining the compatibility of the double base propellant with the polymer
materials (Nylon 12 and PMMA), according to STANAG 4147.

The results show that the double base propellant is incompatible with Nylon 12 and compatible with PMMA when the
microcalorimetry method and the method of vacuum stability test are used.

On the other hand, the propellant appears compatible with both polymer materials when the other two methods are used,
differential scanning calorimetry and chemical analysis after aging. According to these results, it can be concluded that
these two methods alone cannot be relied upon for a final judgement on the compatibility of a double base propellant
with polymer materials.

The reason for this lies in the fact that double base propellants are not homogeneous samples, while the method of
differential scanning calorimetry uses a very small mass of the examined sample (approximately 1-2 mg), all of which
may affect the result.

Also, in the method of chemical analysis after aging at elevated temperature, agglutination of the sample of the polymer
material with the double base propellant may occur, which leads to inaccuracies in the determination of the stabilizer
content, due to the impossibility of completely separating the propellant from the polymeric material.

Given the differences obtained by the different test methods, it is necessary to be very careful in making a final judgment
on compatibility. This only confirms that chemical compatibility is a complex parameter and that it is always necessary
to apply several different methods and to interpret the results carefully.

References
[1] Klerk, W., Meer, N.V., Eerlingh, R.: Microcalorimetric study applied to the comparison of compatibility tests (VST
and IST) of polymers and propellants, Thermochimica Acta, Vol. 269-270, 1995, pp. 231-243.
[2] Vogelsanger, B.: Chemical Stability, Compatibility and Shelf Life of Explosives, Chimia, 58, No. 6, 2004,
pp. 401-408.
[3] STANAG 4147 (Edition 2), Chemical Compatibility of Ammunition Components with Explosives (Non-Nuclear
Applications), June 2001.
[4] Elmqvist, C.J., Lagerkvist, P.E., Svensson, L.G.: Stability and compatibility testing using microcalorimetric method,
J. Hazardous Materials, 1983, pp. 281-290.
[5] Svensson, L.G., Forsgren, C.K., Backman, P.O.: Microcalorimetric methods in shelf life technology, Symposium on
Compatibility of Plastics and Other Materials with Explosives, Propellants and Pyrotechnics, 1988, pp. 132-137.
[6] Stanković, M., Antić, G., Blagojević, M., Petrović, S.: Microcalorimetric Compatibility Testing of the Constituents
of Combustible Materials and Casting Composite Explosives, Journal of Thermal Analysis, 52, 1998, pp. 581-585.
[7] Staub, H.M., Reich, H.U.: Loss Prevention by Thermal Compatibility Tests, AD-POO4 456/O/HDM, 1982,
pp. 271-280.
[8] Liu, Z.R.: The Characteristic Temperature Method to Estimate Kinetic Parameters From DTA Curves and to
Evaluate the Compatibility of Explosives, Propellants, Explos., Pyrotech., Vol. 11, No. 1, 1986, pp. 10-15.
[9] De Klerk, W.P.: Thermal Analysis of Some Propellants and Explosives with DSC and TG/DTA, AD-A320
678/6/HDM, 1996.
[10] Boykin, T.L., Moore, R.B.: The role of specific interactions and transreactions on the compatibility of polyester
ionomers with poly(ethylene terephthalate) and nylon 6,6, Polymer Engineering & Science, 1998, Vol. 38, No. 10,
pp. 1658-1665.


CHARACTERIZATION OF BEHIND ARMOR DEBRIS AFTER


PERFORATION OF STEEL PLATE BY ARMOR PIERCING PROJECTILE
PREDRAG ELEK
University of Belgrade - Faculty of Mechanical Engineering, Belgrade, pelek@mas.bg.ac.rs
SLOBODAN JARAMAZ
University of Belgrade - Faculty of Mechanical Engineering, Belgrade
DEJAN MICKOVIĆ
University of Belgrade - Faculty of Mechanical Engineering, Belgrade
MIROSLAV ĐORĐEVIĆ
Technical Test Center, Belgrade
NENAD MILORADOVIĆ
Ministry of Defense, Materials Resources Sector, Belgrade

Abstract: Characterization of behind armor debris (BAD) is of utmost importance from the aspect of evaluating the
efficiency of an armor piercing projectile, as well as from the point of view of target vulnerability. In the present study, the research focus is on
the size, mass and shape distribution of fragments generated from the target plate. The experimental results of perforation of
6.35 mm thick steel plates by 12.7 mm armor-piercing projectile have been reported. Projectile impact and residual velocity
were measured by the muzzle velocity radar system and make-up screens, respectively. Distribution of a generated BAD
cloud and its energetic properties were determined using the calibrated witness pack. The proposed analytical model of the
problem is based on the energy conservation, fracture mechanics and fragmentation theory. A good agreement is observed
between calculated results and experimental data in terms of the shape, characteristic size and mass distribution of
generated fragments. Further research will be focused on the spatial distribution of fragments, as well as on the
distribution of their velocities.
Keywords: Behind armor debris (BAD), armor piercing projectile, perforation, fragmentation, modeling.

Two types of experimental techniques have been developed


to measure the BAD properties: (i) use of witness packs
behind the armor (several layers of thin metallic plates in
which the number and nature of the resulting holes can be
measured), and (ii) flash radiography (visualization of the
debris cloud and determination of the position and velocity
of particles). The former method is simpler and less
expensive, but provides limited data, whereas the latter
approach enables much more information at the expense of
more complex and high-cost experiments.

1. INTRODUCTION
When a projectile or shaped charge jet impacts and
subsequently perforates a target plate (armor), there will
often be a cloud of debris around the residual projectile
behind the plate [1]. The greater the projectile impact velocity and the higher the hardness and brittleness of the target
material, the more pronounced this effect will be. The cloud generally consists of fragments of the residual penetrator,
as well as of fragments originating from the armor (plug and spall). These fragments are referred to
as behind-armor debris (BAD). The particles of BAD may
hit and injure personnel or damage components inside an
armored vehicle.

Modeling of this complex problem obviously requires a


good understanding of the underlying penetration mechanics
and the statistical nature of the phenomenon. A variety of
BAD related models exist that can reproduce or predict
experimental results [2-8]. These models usually rely
heavily on experimental data and empirical relationships
and are of limited use for modeling the specific
projectile/target interaction.

Therefore, a complete evaluation of the effectiveness of the projectile or the armor requires knowledge of the
characteristics of the BAD. To be able to predict the effect of an impact of BAD, one has to estimate the number of
fragments, as well as the distributions of their size, mass and shape. The spatial distribution of the fragments and their
velocity distribution are also of great importance.

The objective of this study is to present a new analytical


model of BAD and to compare the theoretical results with
data obtained from the experimental research.

determination of velocity of the fastest object behind the


target plate; it can generally be the core of projectile, a
fragment from projectile jacket, or a fragment from the steel
plate. Approximately the same values of residual velocity
were measured using three variants of contact screens: with
a thin layer of insulating paper, and with the styrofoam
layers with 10 mm and 20 mm thickness. Considering that
only projectile core has enough length to make the contact
between two copper foils separated by 20 mm thick
styrofoam, it has been concluded that the core has the
highest velocity. The average value of the core residual
velocity from six shots with approximately the same impact
velocity is 714.5 m/s with dispersion 5 m/s.

2. EXPERIMENTAL INVESTIGATION
In order to investigate phenomena related to BAD formation
and effect, the comprehensive experimental program was
conducted and described in detail in [9]. The schematic
representation of the experimental setup is shown in Fig. 1.

The same technique has been employed for measurement of


the maximum fragment velocity, but with central circular
openings in the screens through which the projectile passes.
The average value of the fragment velocity of 498 m/s was
determined from three shots.

Figure 1. Scheme of the experiment


Armor plates with a thickness of 6.35 mm have been perforated by the 12.7 mm armor piercing incendiary (API)
projectile B32. The core of the projectile is made of a high-strength steel alloy and jacketed with steel. The square target
plate with a 200 mm side is made of armor steel (HPA-10). Relevant properties of the bullet and the target plate are
summarized in Tables 1 and 2, respectively.

The opening created in the target plate was approximately


cylindrical with average diameter of 14.12 mm.
Characterization of the produced BAD in terms of their size,
shape, mass, spatial distribution and kinetic energy was
performed using a pack of metal witness plates with
transversal dimensions 600 mm x 600 mm. The witness
pack consists of four layers: the first two plates are
produced from aluminum of 1 mm thickness, the third layer
is made from 2 mm thick aluminum, and the remaining
fourth plate is made from 1.5 mm thick steel. Metal plates
are separated with 20 mm thick layers of styrofoam. After
plate perforation, the BAD impact the witness pack creating
multiple perforations. The first plate of the witness pack
records the number of fragments with impact energy
sufficient for its perforation. Also, the position of
perforations and the area of created openings can be
determined by analysis of digitized image of the first plate.
The witness pack is disassembled in order to determine the
level of perforation (the last perforated layer) for each
fragment, which is related to its impact energy. Fragments
arrested in the styrofoam are carefully recovered using
powerful magnets and pincette. Afterwards, the mass and
dimensions of collected fragments have been measured.

Table 1. Basic properties of the projectile
Bullet type:               API B32
Caliber:                   12.7 mm
Initial velocity:          810-845 m/s
Projectile mass:           47 g
Core mass:                 30 g
Core diameter:             10.9 mm
Core length:               52.5 mm
Core hardness:             HRC 61

Table 2. Properties of the target
Target type:               Armor plate HPA-10
Thickness:                 6.35 mm
Hardness:                  HB 444
Yield strength:            1420 MPa
Ultimate tensile strength: 1641 MPa
Elongation:                12%
Fracture energy:           4.5 J/cm2

Experimental results and their comparison with theoretical


predictions have been shown in subsequent sections.

Projectiles have been fired from the long-range anti-materiel


rifle M93 Black Arrow fixed in a universal stand. An
optimum target plate distance from the muzzle of 380 mm is
selected, providing normal impact and control of the
placement of impact points on the target. Impact velocity of
the projectile is measured by the radar, and the average
value from six shots is 826.5 m/s with dispersion of 6 m/s.

3. THEORETICAL CONSIDERATIONS
Following the approach proposed by Dinovitzer et al. [10], the energy Ef transferred to the target plate, which leads to its
fragmentation, is a portion of the projectile impact energy and can be determined from the energy conservation law:

The residual velocity of projectile and BAD approximately


200 mm behind the target was measured using make-up
(contact) screens. The functioning of make-up screens is
based on the measurement of time of flight. The contact
screen consists of two 0.35 mm thick copper foils separated
by an insulating layer of paper or styrofoam. The first
contact screen opens and the second closes the electrical
circuit by passage of the projectile (or fragment) through the
detector, enabling the measurement of time of flight and
calculation of velocity. This technique enables

m v0^2 / 2 = Erp + Erf + Ebl + Ef    (1)

The expression on the left-hand side of Eq. (1) represents the kinetic energy of a projectile of mass m and impact velocity
v0, which is converted into four energy forms. The residual kinetic energy of the projectile, Erp, is simply defined by:
Erp = m vr^2 / 2    (2)

where vr is the projectile residual velocity after perforation. The residual kinetic energy of the fragments which form the
BAD cloud and originate from the target plug can be written as:

Erf = mplug vf^2 / 2    (3)

where mplug is the mass of the plug and vf the average velocity of the generated fragments. The third energy term, Ebl,
corresponds to the ballistic limit energy, i.e. the minimum energy needed for complete perforation of the target. In the
first approximation, this energy does not depend on the impact velocity and can be calculated as:

Ebl = η m vbl^2 / 2    (4)

where vbl is the ballistic limit velocity, and the parameter η = 0.971. It should be noted that the ballistic limit energy is
consumed in the processes of separation of the plug from the target and its deformation and heating. When the impact
velocity v0 is equal to the ballistic limit velocity vbl, the residual energies Erp and Erf are equal to zero. The ballistic limit
velocity is a property of the penetrator/target system and can be determined experimentally, analytically or numerically.

The energy available for plug fragmentation, Ef, can be evaluated from Eqs. (1-4):

Ef = m v0^2/2 - m vr^2/2 - mplug vf^2/2 - η m vbl^2/2    (5)

Based on fracture mechanics, the cracking area Ac in the plug zone, which is associated with the multiple fracture and
fragmentation that leads to BAD formation, is the ratio of the available fragmentation energy Ef and the rate of energy
consumption per unit area of crack growth, G:

Ac = Ef / G    (6)

It will be conditionally assumed that, within certain limits related to the impact velocity and the shape of the projectile,
the fracture property G is a target-material-dependent constant. Having in mind the basic geometry of target plate
perforation, the total surface area of the generated fragments, Af, can be determined from the relation:

2 Ac = Af + Acyl - (Aentry + Aexit)    (7)

where Acyl, Aentry and Aexit are the area of the cylindrical opening created in the target plate and the entry and exit
circular areas, respectively.

The next step in modeling the fragmentation and creation of BAD consists in establishing a relation between the
calculated total surface area of generated fragments Af and the number of created fragments and their characteristic
fragment size. Although it is known that fragments generally have irregular shapes, we will adopt the common
assumption that they are approximately in the form of a rectangular parallelepiped, e.g. [2], [3]. If the fragment side
lengths are a, b, and c (a ≥ b ≥ c), then the fragment shape parameters p and q can be defined as:

p = b/a,  q = c/a,  0 < q ≤ p ≤ 1    (8)

After consideration of possible distribution functions of the fragment shape parameters p and q, and a preliminary
analysis of the experimental data, it is concluded that the beta distribution provides much better results than the "slot
machine" model [11] and the 2D Mott model [12]. The probability density function (pdf) of the beta distribution is:

f(x) = x^(α-1) (1 - x)^(β-1) / B(α, β),  0 ≤ x ≤ 1    (9)

where α > 0 and β > 0 are the pdf shape parameters, and the beta function B(α, β) is defined by

B(α, β) = Γ(α) Γ(β) / Γ(α + β)    (10)

where Γ(z) is the gamma function. The characteristic values of the beta distribution parameters α and β for both fragment
shape parameters p and q can be determined from experimental data.

If we assume that the determined distributions of the shape parameters p and q are representative for the particular
projectile/target system and the considered domain of impact velocities, then the characteristic fragment size and the
total number of fragments can be evaluated. If ā, b̄ and c̄ are the average values of the fragments' sides, and the
corresponding ratios are:

p̄ = b̄/ā,  q̄ = c̄/ā    (11)

then the mass and the surface area of the "average" fragment can be expressed as:

m̄f = ρ ā b̄ c̄ = ρ p̄ q̄ ā^3    (12)

Āf = 2(ā b̄ + b̄ c̄ + c̄ ā) = 2(p̄ + q̄ + p̄ q̄) ā^2    (13)

If the total number of generated fragments is N0, then their total mass and surface area should satisfy the following
conditions:

N0 ρ p̄ q̄ ā^3 = mplug    (14)

2 N0 ā^2 (p̄ + q̄ + p̄ q̄) = Af    (15)

where ρ is the target material density. From Eqs. (14) and (15), the average fragment length and the total number of
fragments can be calculated:

ā = [2 mplug / (ρ Af)] · (p̄ + q̄ + p̄ q̄) / (p̄ q̄)    (16)

N0 = [ρ^2 Af^3 / (8 mplug^2)] · p̄^2 q̄^2 / (p̄ + q̄ + p̄ q̄)^3    (17)

The mass distribution of the generated fragments is usually expressed in the form of the cumulative mass distribution
N(m) = N(>m), which defines the number of fragments that have an individual mass greater than m. There are many
functions that are potential candidates for this distribution, which are thoroughly analyzed in [13] and [14]. The analysis
of the available experimental results shows that the exponential distribution (also known as the Lienau or Grady-Kipp
distribution) provides the best correlation with the experimental data:

N(m) = N0 exp(-m/μ)    (18)

where μ is the distribution parameter that corresponds to the average fragment mass.
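As a numerical check of Eqs. (16) and (17), the sketch below uses the experimentally determined average shape factors and the reported plug mass and fragment area; the target density is an assumed typical armor-steel value, since it is not quoted explicitly in the paper.

```python
def average_fragment_size(m_plug, rho, A_f, p, q):
    """Average fragment edge length, eq. (16)."""
    return (2.0 * m_plug / (rho * A_f)) * (p + q + p * q) / (p * q)

def fragment_count(m_plug, rho, A_f, p, q):
    """Total number of fragments, eq. (17)."""
    return (rho**2 * A_f**3 / (8.0 * m_plug**2)) * (p * q)**2 / (p + q + p * q)**3

# Inputs reported in the paper, converted to SI units; rho is an assumption.
m_plug = 7.8e-3            # plug mass, kg
A_f = 2.51e3 * 1e-6        # total fragment surface area, m2
p_bar, q_bar = 0.6043, 0.3986
rho = 7.85e3               # armor steel density, kg/m3 (assumed)

a_bar = average_fragment_size(m_plug, rho, A_f, p_bar, q_bar)
N0 = fragment_count(m_plug, rho, A_f, p_bar, q_bar)
print(f"a = {a_bar * 1e3:.2f} mm, N0 = {N0:.0f}")
# Comes out close to the reported average size of 4.12 mm and about 59 fragments.
```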

4. RESULTS AND ANALYSIS


At the beginning of the analysis of the results, we will consider the shape of the generated fragments. The mass of a
fragment can be expressed as a function of its side lengths as:

m = k ρ a b c    (19)
where the parameter k depends on the fragment shape; e.g. for an ellipsoid k = π/6, for a cylinder k = π/4, and for a
parallelepiped k = 1. Figure 2 presents experimental data comparing the weighed fragment mass and the product of its
measured dimensions multiplied by the density, for 145 fragments generated and collected from three shots. The data
obviously exhibit a linear dependence, and a least-squares fit gives an optimum line slope defined by k = 1.064. This
result is remarkably close to unity, which justifies the assumption of the parallelepipedic shape of the fragments.

The histograms of the experimentally determined fragment shape parameters p and q, along with the corresponding
approximations by beta distributions, are shown in Figures 3 and 4, respectively. The optimized values of the parameters
of the beta distributions are αp = 7.606, βp = 5.054 for the shape factor p, and αq = 13.367, βq = 20.860 for the shape
factor q. As can be observed from the diagrams, the beta distribution provides a very good approximation of the
experimentally obtained fragment shape parameters.

Figure 3. Distribution of the fragment shape parameter p: comparison of experimental data and the optimized beta
distribution

Figure 2. Fragment mass as a function of ρabc for 145 collected fragments. A linear fit yields the slope of the line
defined by k = 1.064.

Figure 4. Distribution of the fragment shape parameter q: comparison of experimental data and the optimized beta
distribution
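As a quick consistency check of the fitted distributions, the sketch below evaluates the means of the reported beta parameters (the mean of a beta distribution is α/(α+β)); the function name is illustrative.

```python
def beta_mean(alpha, beta):
    """Mean of a beta distribution: alpha / (alpha + beta)."""
    return alpha / (alpha + beta)

# Optimized beta parameters reported for the shape factors p and q:
fits = {"p": (7.606, 5.054), "q": (13.367, 20.860)}
for name, (a, b) in fits.items():
    print(f"{name}: mean = {beta_mean(a, b):.3f}")
# The means (about 0.60 and 0.39) are consistent with the average shape
# factors used in the size/number estimates (0.6043 and 0.3986).
```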

Application of the energy conservation equation (5) gives a fragmentation energy of Ef = 61 J. Here we have used the
measured values of the projectile mass m = 47 g, the projectile impact velocity v0 = 826.5 m/s, the projectile residual
velocity vr = 714.5 m/s, the plug mass mplug = 7.8 g (calculated from the measured diameter of the cylindrical hole in the
plate of 14.12 mm), the average fragment velocity vf = 498 m/s, and the adopted value of the parameter η = 0.97.

The ballistic limit velocity is calculated using the numerically based empirical relation of Rosenberg and Dekel [15] as
vbl = 370 m/s, which is very close to the experimental data of a similar experiment [16].

The calculated value of the total area of the generated fragments, Eqs. (6) and (7), is Af = 2.51·10^3 mm^2. Using the
experimentally determined dimensionless fragment shape parameters p̄ = 0.6043 and q̄ = 0.3986, the evaluated average
fragment size and the expected number of fragments, from Eqs. (16) and (17), are ā = 4.12 mm and N0 = 59,
respectively.
The theoretical prediction of the average fragment size is
very close to the experimental value of 4.64 mm, especially
having in mind that a certain number of small size
fragments has not been collected. Also, the average number
of perforations in the first plate of a witness pack was 53,
which does not take into account a significant number of
dents and surface scratches resulting from ricocheting
fragments of low mass.
It is very important to analyze the distribution of the initial
impact energy of the projectile, which is depicted in Fig. 5.
The majority of this kinetic energy (in the present
experiment about 75%) is retained by the penetrator in the
form of its residual energy. The ballistic limit energy and
the residual kinetic energy of the fragments contribute with
about 19% and 6%, respectively. Hence, the remaining energy available for fragmentation, Ef, is surprisingly low: below
1% (in the considered experiment approximately 0.4%) of the projectile impact energy. This also means that
the calculation of the fragmentation energy is very sensitive
to the change of main input parameters vr, vf, and vbl.
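A short sketch of the energy bookkeeping of Eqs. (1)-(5) with the measured values quoted above; note that, exactly as stated, the small remainder Ef is extremely sensitive to vr, vf and vbl, so with the rounded inputs it can deviate noticeably from the reported 61 J.

```python
def energy_budget(m, v0, vr, m_plug, vf, vbl, eta=0.971):
    """Split of the projectile impact energy according to eqs. (1)-(5)."""
    E0 = 0.5 * m * v0**2                 # impact energy
    Erp = 0.5 * m * vr**2                # residual energy of the projectile
    Erf = 0.5 * m_plug * vf**2           # residual energy of the BAD fragments
    Ebl = 0.5 * eta * m * vbl**2         # ballistic limit energy
    Ef = E0 - Erp - Erf - Ebl            # remainder available for fragmentation
    return E0, Erp, Erf, Ebl, Ef

E0, Erp, Erf, Ebl, Ef = energy_budget(m=47e-3, v0=826.5, vr=714.5,
                                      m_plug=7.8e-3, vf=498.0, vbl=370.0)
for name, E in [("Erp", Erp), ("Ebl", Ebl), ("Erf", Erf), ("Ef", Ef)]:
    print(f"{name}: {E:8.1f} J  ({100.0 * E / E0:5.1f} % of E0)")
# Erp, Ebl and Erf come out near 75 %, 19 % and 6 % of E0, as in Fig. 5; the
# remainder Ef stays below 1 % and is very sensitive to the input velocities.
```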

Figure 6. The mass distribution of generated fragments:


the comparison of experimental data and the optimized fit
with the exponential (Grady-Kipp) distribution

Spatial distribution of fragments can be presented by the


fragment dispersion angle, defined as the acute angle
between the fragment straight-line trajectory and the
projectile impact velocity direction. This angle can be
readily accessed by the analysis of data (centers of
perforation holes) for the first plate of a witness pack.
Figure 7 presents the histogram of experimentally obtained
fragment dispersion angles. The average number of
fragments that perforate the first plate of the witness pack is
53 (from three experiments) and their dispersion angle
obviously exhibits a Rayleigh-like distribution. The maximum dispersion angle observed was 19.4°, but more than 90%
of the formed fragments fall within an angle of 10°.
The proposed theoretical model currently cannot predict
spatial distribution of generated fragments.
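A minimal sketch of how such dispersion angles can be extracted from digitized hole positions on the first witness plate; the hole coordinates and the target-to-witness-pack standoff used below are purely illustrative assumptions, not values from the experiment.

```python
import math

def dispersion_angle_deg(x_mm, y_mm, standoff_mm):
    """Acute angle between a fragment trajectory (assumed straight from the
    perforation point) and the shot line, from the hole position on the
    first witness plate."""
    radial = math.hypot(x_mm, y_mm)
    return math.degrees(math.atan2(radial, standoff_mm))

# Illustrative hole coordinates (mm) relative to the shot line and an assumed
# standoff between the target plate and the witness pack.
holes = [(12.0, -8.0), (40.0, 25.0), (-60.0, 10.0)]
standoff = 300.0
angles = [dispersion_angle_deg(x, y, standoff) for x, y in holes]
print([f"{a:.1f} deg" for a in angles])
```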

The comparison of the mass distribution of the 145 collected fragments and the suggested exponential distribution law is
shown in Fig. 6. A very good agreement between the experimental data and the theoretical model can be observed. The
optimum value of the parameter μ (which is equal to the average fragment mass) is 0.162 g. This value is between the
calculated average fragment mass m̄f = ρ p̄ q̄ ā^3 = 0.131 g and the experimentally determined value of 0.220 g.
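A minimal sketch of how Eq. (18) can be used with the fitted parameters; N0 and mu are the values reported above, while the mass thresholds are arbitrary.

```python
import math

def cumulative_count(m, N0, mu):
    """Expected number of fragments heavier than m, eq. (18): N(m) = N0 * exp(-m/mu)."""
    return N0 * math.exp(-m / mu)

N0, mu = 59, 0.162            # total fragment count and fitted mean fragment mass, g
for m in (0.1, 0.2, 0.5, 1.0):  # fragment mass thresholds, g
    print(f"N(>{m:.1f} g) = {cumulative_count(m, N0, mu):.1f}")
```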

Figure 5. Distribution of the projectile impact energy


(100%) among the projectile residual energy Erp, the
fragment residual energy Erf, the ballistic limit energy Ebl,
and the fragmentation energy Ef

Figure 7. Experimentally obtained spatial distribution of


fragments originating from target plate

5. CONCLUSION
Characterization of behind armor debris (BAD) is of great importance for the assessment of projectile or armor
effectiveness. A new analytical model for BAD characterization has been developed on the basis of the energy
conservation law, fracture mechanics and fragmentation statistics. Using a very limited number of experimentally
determined parameters, the model enables evaluation of the total number of generated fragments, as well as of their
average size and mass distribution. The calculated results agree very well with the data obtained from the comprehensive
experimental investigation.

These promising results motivate further research and improvement of the model, which will be focused on the
treatment of the fragments' spatial distribution and their velocity distribution.

References
[1] Hartmann, M.: Behind armor debris - Modelling and simulation - A literature review, FOI Swedish Defence
Research Agency, Report FOI-R-1678-SE, 2005.
[2] Mayseless, M., Sela, N., Stilp, A.J., Hohler, V.: Behind the armor debris distribution function, 13th International
Symposium of Ballistics, Stockholm, Sweden, pp. 77-85, 1992.
[3] Saucier, R., Shnidman, R., Collins, J.C.: A stochastic behind-armor debris model, 15th International Symposium on
Ballistics, Jerusalem, Israel, pp. 431-438, 1995.
[4] Dinovitzer, A.S., Szymczak, M., Brown, T.: Behind-Armour Debris Modeling, 17th International Symposium on
Ballistics, Midrand, South Africa, pp. 3-275 - 3-284, 1998.
[5] Yossifson, G., Yarin, A.L.: Behind-the-armor debris analysis, International Journal of Impact Engineering, 27(8)
(2002) 807-835.
[6] Arnold, W., Rottenkolber, E.: Physics of behind armor debris threat reduction, International Journal of Impact
Engineering, 33(1-12) (2006) 53-61.
[7] Kim, H.S., Arnold, W., Hartmann, T., Rottenkolber, E., Klavzar, A.: A model for behind armor debris from EFP
impact, 26th International Symposium on Ballistics, Miami, FL, pp. 1410-1419, 2011.
[8] Verolme, J.L., Szymczak, M., Broos, J.P.F.: Metallic witness packs for behind-armour debris characterisation,
International Journal of Impact Engineering, 22(7) (1999) 693-705.
[9] Đorđević, M.: Characterization of the phenomena behind a metal plate impacted by an armor piercing small arms
projectile (in Serbian), M.Sc. thesis, University of Belgrade - Faculty of Mechanical Engineering, 2004.
[10] Dinovitzer, A.S., Szymczak, M., Erickson, D.: Fragmentation of targets during ballistic penetration events,
International Journal of Impact Engineering, 21(4) (1998) 237-244.
[11] Curran, D.R.: Simple fragment size and shape distribution formulae for explosively fragmenting munition,
International Journal of Impact Engineering, 20(1-5) (1997) 197-208.
[12] Grady, D.: Fragmentation of Rings and Shells: The Legacy of N.F. Mott, Ch. 6: Application to the Biaxial
Fragmentation of Shells, Springer, Berlin, 2006.
[13] Elek, P., Jaramaz, S.: Fragment size distribution in dynamic fragmentation: Geometric probability approach, FME
Transactions, 36(2) (2008) 59-65.
[14] Elek, P., Jaramaz, S.: Fragment mass distribution of naturally fragmenting warheads, FME Transactions, 37(3)
(2009) 129-135.
[15] Rosenberg, Z., Dekel, E.: Terminal ballistics, Springer, Berlin, 2012.
[16] Dinovitzer, A.: Debris characterisation and modelling (DeCaM) software improvements, DREV-CR-2003-091,
Defence Research Establishment Valcartier, Canada, 1999.

VISUALIZING THE THERMAL EFFECT OF THERMOBARIC


EXPLOSIVES
UROŠ ANĐELIĆ
Military Technical Institute, Belgrade, uroshcs@gmail.com
DANICA SIMIĆ
Military Technical Institute, Belgrade, simic_danica@yahoo.com
DRAGAN KNEŽEVIĆ
School of Electrical Engineering, Belgrade, Military Technical Institute, Belgrade
MARKO DEVI
Military Technical Institute, Belgrade

Abstract: Measuring the temperature of an explosion has always been a big challenge. In the last several decades many
different techniques were used for this purpose, but none of them were reliable enough. Very fast temperature rise and
fall time that occurs in explosions is very difficult to register with any kind of sensor. This paper presents a different
approach that uses a high-speed infrared camera to record the explosion temperature. Optimal camera setup was
achieved after a few attempts. The result was a detailed infrared video of the explosion. Expansion of the products of
detonation is clearly visible in this video. Different sets of data were extracted from the infrared recordings, like
temperature-time and temperature-distance graphs. The results confirmed that the thermal effect of the thermobaric
explosives can be reliably measured with this technique.
Keywords: temperature, explosive, thermovision, infrared camera.
very short time, measuring it is a very difficult task. The
most advanced probes available (thermocouples) are
expensive, and cannot withstand the explosion if they are
actually in it, so they have to be used only at a certain
distance from the explosion [7]. Optical pyrometers, that
can measure the temperature from a distance, can only
measure surface temperatures at a certain point. They also
have to go through a complicated calibration to measure
the temperature of the explosion products like dust and
different gas products.

1. INTRODUCTION
Detonation is a process that usually lasts tens of
microseconds. During this time, all of the explosive
molecules turn to gaseous products of detonation. Then,
these products expand (explode), and that expansion can
last for hundreds of microseconds [1].
If the explosive composition [2] is thermobaric [3-5], i.e.
it contains a metal fuel (like Al, Mg, B) and an oxidizer
(like NH4ClO4), then there is a third process that occurs
after the expansion of the products of detonation: aerobic
combustion of the metal fuel, which can last for tens of
milliseconds [2-5].

For these reasons there are not many published papers


that try to deal with this problem. With the advancements
in infrared imaging techniques, most importantly the
increase in infrared camera frame rates, a new approach to
temperature measurements became possible.

This post detonation aerobic combustion reaction gives


thermobaric explosives (TBE) improved incendiary and
blast effects, compared to conventional high explosives.
Shock waves generated by their detonation have a much
longer duration than those generated by conventional high
explosives, and also have a greater lethal radius [6]. In a
confined space, TBE can cause a series of reflected shock
waves, which make them highly effective for the
destruction of soft targets like tunnels, bunkers,
underground structures, buildings, field fabrications, etc.

This paper will describe the technique that has been established in the Military Technical Institute through several
iterations of trial and error [8, 9]. Infrared recordings were
used to obtain different sets of data like temperaturetime
and temperature-distance graphs. Also, some of the
known characteristics of the products of detonation
expansion were noticed, which helped to interpret the
acquired images.

The longer the post detonation aerobic combustion lasts,


and the higher its temperature is, the better are the
destructive capabilities of the TBE.

Extracting temperaturetime graphs from infrared


recordings of explosions was also done by researchers
from China [10], but it was difficult to compare results of
this research with theirs. For example, their infrared

Since this temperature changes rapidly and it lasts for a



recordings show that the explosion lasts a lot longer than


recordings presented in this work.

the direction of the detonation wave pointed directly to


the camera (Picture 2).

2. EXPERIMENTAL SAMPLES
The experiments that are presented in this paper were done on 50 mm diameter cylindrical cast thermobaric explosive
charges. The charges were made with the following components:
- octogen (DINO, Norway), according to MIL-H-45444,
- aluminium, according to MIL-STD-129,
- magnesium (ECKA GRANULES, Austria), according to MIL-DTL-382D,
- ammonium perchlorate, 7-10 µm, obtained by grinding the 200 µm material on a vertical hammer mill ACM-10,
- polymeric binder, based on hydroxy-terminated polybutadiene (Tanyun, China) cured by isophorone diisocyanate,
including additives (plasticizer, antioxidant, and bonding agent) [11].
The mass fractions of the components of the explosive were HMX/AP/Al/Mg/HTPB = 45/10/21/9/15 %. After the
manufacturing process, the charges were removed from their molds and cut into 400 g samples. The density was
measured (MIL 286 B method, with the Mohr balance in toluene at 25°C) to be 1.7 g/cm3, and the detonation velocity
was measured by electrocontact probes to be 7150 m/s.

Picture 1. Photo of a sample on a stand


The camera is calibrated to a black body, which means
that the temperature has to be corrected by entering the
emissivity of the recorded object in the software.
Unfortunately, emissivity of an explosion fireball proved
difficult even to estimate. The fireball is a mixture of
solid and gas particles, some of them reacting with others,
which changes the chemical composition of the system
really fast. Also, there is no spatial homogeneity, so there
are parts with higher or lower emissivity.

3. EXPERIMENTAL SETUP
Recordings of the explosions were made with a high-speed infrared (IR) camera FLIR SC7200. The camera
characteristics are given in Table 1. Software Altair was
used for data acquisition and analysis.

Because of this, the temperature that we measure is


actually lower than the real temperature. But, even though we can't measure the absolute value of the temperature, we
can compare different measurements because the emissivity is always the same for the same explosive composition. For
the same reason, the results can't be compared with different techniques for measuring the explosion temperature.

Table 1. Characteristics of the FLIR SC7200 camera
Spectral range:                        1.5-5.1 µm
Maximum screen resolution:             320 × 256 px
Camera sensitivity (NETD):             < 20 mK
Aperture:                              F/3
Pitch:                                 30 µm
Technology:                            In-Sb
Objective:                             50 mm
Field of view angle of the objective:  11° × 8.8°
Total calibrated temperature range:    5-2500 °C
Calibrated temperature sub-ranges:     TR-1: 5-300 °C; TR-2: 300-1500 °C; TR-3: 1500-2500 °C

Picture 2. Experimental setup

4. RESULTS AND DISCUSSION

Parameters like camera frequency, distance between the


camera and the explosive, and the field of view were
different for every experiment presented here, and will be
accentuated when needed. The camera frequency can only
be increased if the field of view (and the resolution) is
reduced.

Many high-speed infrared recordings were made before


the optimal experimental setup was determined. Here, we
present three recordings, each representing a step towards
the optimal setup.
4.1. Infrared recordings

The explosive was always placed 2 m from the ground


and the axis of symmetry was aligned towards the camera.
A photo of a sample is presented in Picture 1. Initiation of
the samples was done on the far end of the charge, so that

1) 174 Hz recording
The first recordings done with the FLIR camera were
used to get some insight into the IR scene of the explosion

process. The largest field of view was used, with a resolution of 320 × 256 px. The camera was placed 70 m from the
explosive. The recording frequency was 174 Hz, which is 5.75 ms per frame, and the upper limit of the temperature
range was set to 130°C. Six frames are presented in Picture 3.

ejection. This behavior is sketched in Picture 2. Since the


frontal ejection is aligned towards the cameras it is
recorded as the inner, circle shaped area. The same way,
the lateral ejection makes a torus shaped cloud of fuel and
oxidizer which is recorded as the outer, ring area. The two
areas are not differentiated too much - it is a pretty
smooth transition between the two, but it is enough to be
easily noticeable in the recordings.

Picture 3. 174 Hz recording, full resolution 320 256 px,


camera explosive distance 70 m

Picture 4. 655 Hz recording, resolution 160 128 px,


camera explosive distance 70 m

These frames show that the 174 Hz frame rate is too slow
and that the temperature limit needs to be much higher.
The first several frames mostly just show complete
camera saturation, after which only the cooling of the air
can be seen. There is no important information that can be
extracted from these frames.

These frames show that the 655 Hz frame rate is fast


enough to capture the aerobic combustion of the fuel, but
not enough to capture the initial expansion of the fuel
with the products of detonation.

2) 655 Hz recording

3) 1459 Hz recording

After several recordings with different camera setups we realized that only the area 2-3 m around the center of the
explosion is interesting to record. The field of view was narrowed down to 160 × 128 px, which allowed us to record the
explosion at 655 Hz (Picture 4). This frequency is high enough that we can observe some interesting details of the
explosion. During these experiments the infrared camera had some dead pixels, but this didn't impact the results.

In order to obtain a more detailed recording, the frequency was increased to 1459 Hz, but the size of the frame had to be
reduced to 64 × 120 px. With such a small frame, only one side of the explosion could be recorded, in this case the right
side. Also, the camera-explosive distance was reduced to 50 m, which further increased the level of detail. The stand was
replaced with a small wooden pole on which the explosive was impaled (Picture 5). Because of this the products of
detonation could expand in all directions and the ring shaped area completed a full circle (Picture 6).

This experimental setup allows us to see two different


areas of combustion: the outer, ring shaped area and the
inner, circle shaped area. The ring shaped area does not
complete a full circle - on the bottom side there is a drop
in temperature. This occurs because the stand for the
explosive, shown in Figure 1 slows down the downward
expansion of the products of detonation. The inner, circle
shaped area burns for less time than the outer area simply
because it contains less fuel and oxidizer (Al, AP).
The two areas are not actually inside one another. They
are both at a certain distance from the center of the
detonation, and the circle shaped area is even a little
farther than the ring area. The explosive charge is
cylindrical which causes the products of detonation to
initially spread in two distinct ways - lateral and frontal

Picture 5. Photo of a sample on a pole



On the other hand, considering only one frame and taking all of the pixels in one horizontal line (that goes through the
explosion center) gives a temperature-distance (T-x) graph that shows the spatial temperature distribution. This can only
be done with a high-speed infrared camera and no other technique. Picture 8 shows a distribution where both the inner
and the outer area are present, and in Picture 9 only the outer area can be seen.

Picture 6. 1459 Hz recording, resolution 80 128 px,


camera explosive distance 50 m

Picture 8. Temperature-distance graph with both the


inner and the outer area

In order to increase the frequency even more, the size of the frame would have to be reduced to 64 × 8 px, which is not
enough to capture the explosion. So the optimal setup for recording explosions of TBE with this camera is the one in
Picture 6. This setup allows for the most precise measurements, while having a frame size that is big enough to fit at
least half of the fireball.
4.2. Extracted graphs
Every pixel on every frame can be considered as a
temperature measurement in a single point in time.
Taking the same pixel for multiple frames gives a
temperature-time (T-t) graph in the same way an optical
pyrometer or a thermocouple probe does. One T-t graph is
presented in Picture 7. There are actually three T-t curves
here: one that is from the center of the explosion, one at
0.8 m from the center and one at 1.6 m.
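A minimal sketch of how such graphs can be extracted, assuming the radiometric frames have been exported from the camera software as a NumPy array (frames × rows × columns of temperatures); all names, the synthetic data and the mm-per-pixel scale are illustrative assumptions.

```python
import numpy as np

def temperature_time(frames, row, col, frame_rate_hz):
    """T-t history of a single pixel across all frames."""
    t = np.arange(frames.shape[0]) / frame_rate_hz
    return t, frames[:, row, col]

def temperature_distance(frames, frame_idx, row, mm_per_px):
    """T-x profile along one horizontal line of a single frame."""
    profile = frames[frame_idx, row, :]
    x = np.arange(profile.size) * mm_per_px
    return x, profile

# Synthetic data standing in for an exported radiometric sequence (degC):
frames = np.random.uniform(20.0, 1500.0, size=(200, 128, 160))   # placeholder
t, T_center = temperature_time(frames, row=64, col=80, frame_rate_hz=655.0)
x, T_line = temperature_distance(frames, frame_idx=10, row=64, mm_per_px=15.0)
```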

Picture 9. Temperature-distance graph with only the


outer area

5. CONCLUSION
A high-speed infrared camera is shown to be a powerful tool for visualizing the thermal effect of thermobaric explosives,
and in the same way of explosives in general. Also,
temperature measurements can be made in different ways,
and a lot of data can be extracted from the recordings. These
measurements can be compared with each other but not with
a measurement made by a different instrument, because the
emissivity of the explosion cloud is unknown.
Figuring out this emissivity is a difficult task, but it would
enable determining the absolute temperature from the
recordings.
Picture 7. Temperature-time graph

References
[1] Jeremić, R., Explosions and Explosives (in Serbian), Vojnoizdavački Zavod, Belgrade, 2007.
[2] Simić, D., Petković, J., Milojković, A., Brzić, S., Influence of Composition on Thermobaric Explosives
Processability, Scientific Technical Review, Belgrade, 2013, 63(3), 3-8.
[3] Agrawal, J.P., High Energy Materials: Propellants, Explosives and Pyrotechnics, WILEY-VCH, Weinheim, 2010;
ISBN 9783527326105; DOI: 10.1002/prep.201000098.
[4] Chan, M.L., Meyers, G.W., Advanced Thermobaric Explosive Compositions, US Patent No. US 6,955,732 B1, 2005.
[5] Chan, M.L., Turner, A.D., High Energy Blast Explosives for Confined Spaces, US Patent No. 6,969,434, 2002.
[6] Wildegger-Gaissmaier, A.E., Aspects of Thermobaric Weaponry, Military Technology, 2004, 28(6), 125-130.
[7] Ludwig, C., Verifying Performance of Thermobaric Materials for Small to Medium Caliber Rocket Warheads,
Talley Defense Systems; http://www.dtic.mil/ndia/2003gun/lud.pdf.
[8] Simić, D., Andjelić, U., Knežević, D., Savić, K., Draganić, V., Sirovatka, R., Tomić, Lj., Thermobaric Effects of
Cast Composite Explosives of Different Charge Masses and Dimensions, Central European Journal of Energetic
Materials, ISSN 1733-7178, (2016), Vol. 13 No. 1, pp. 161-182.
[9] Simić, D., Andjelić, U., Knežević, D., Mišković, K., Gačić, S., Terzić, S., Brzić, S., Quantification of Thermal
Effect of a Thermobaric Detonation by Infrared Imaging Technique, 18th Seminar on New Trends in Research of
Energetic Materials NTREM 2015, Pardubice, Czech Republic, April 15-17, Proceedings (ISSN 978-80-7395-891-6),
pp. 829-838, 2015.
[10] Chen, Y., Xu, S., Wu, D.J., Liu, D.B., Experimental Study of the Explosion of Aluminized Explosives in Air,
Central European Journal of Energetic Materials, ISSN 1733-7178, (2016), Vol. 13 No. 1, pp. 117-134.
[11] Brzić, S.J., Jelisavac, Lj.N., Galović, J.R., Simić, D.M., Petković, J.Lj., Viscoelastic Properties of
Hydroxyl-Terminated Poly(Butadiene) Based Composite Rocket Propellants, Hemijska Industrija, 2014, 68(4), 435-443.


RELIABILITY OF SOLID ROCKET PROPELLANT GRAIN UNDER


SIMULTANEOUS ACTION OF MULTIPLE TYPES OF LOADS
NIKOLA GLIGORIJEVIĆ
Military Technical Institute, Rocket Armament Sector, Belgrade, nikola.gligorijevic@gmail.com
SAŠA ŽIVKOVIĆ
Military Technical Institute, Rocket Armament Sector, Belgrade, sasavite@yahoo.com
VESNA RODIĆ
Military Technical Institute, Sector for Materials and Protection, Belgrade, vesna_rodic@vektor.net
SAŠA ANTONOVIĆ
Military Technical Institute, Rocket Armament Sector, Belgrade, saleantonovic82@gmail.com
ALEKSANDAR MILOJKOVIĆ
Military Technical Institute, Sector for Materials and Protection, Belgrade, aleksandar.milojkovic@gmail.com
BOJAN PAVKOVIĆ
Military Technical Institute, Rocket Armament Sector, Belgrade, bjnpav@gmail.com
ZORAN NOVAKOVIĆ
Military Technical Institute, Aeronautical Sector, Belgrade, novakoviczoca@gmail.com

Abstract: A case-bonded solid propellant rocket grain is subjected to many stress-inducing loads during the service life, due
to temperature, extended polymerization, transportation, vibration, acceleration, aerodynamic heating etc. and finally due to
the operating pressure in the rocket motor. Composite propellant is a viscoelastic material whose mechanical properties
highly depend on temperature and strain rate and sometimes may vary in the range of use of rocket motors for several orders
of magnitude. Relationships between stresses and strains are much more complex than for the elastic material. Therefore, the
stress and strain analysis and estimation of safety factor under the action of each individual load is quite complex and
sometimes impossible. An even greater problem occurs when multiple different types of loads act simultaneously. An extreme
case occurs in the moment of rocket motor ignition. Then, the very fast load act due to the pressure, at which the propellant
tensile strength is high. At the same time, the very slow thermal load act on the grain, and in these conditions the propellant
tensile strength is low. The vector addition of the stresses and strains due to different loads is not possible. It is also not
possible to define the equivalent or resultant values of tensile strength and allowable strain. The principle of adding the
current damages is applied here, similar to the model of cumulative damage. In addition, due to the large variations in
mechanical properties of the rocket propellant, it is necessary to apply the methods of mathematical statistics for assessing the
propellant grain reliability and service life.
Keywords: Propellant Grain, Viscoelasticity, Stress, Tensile strength, Damage.

1. INTRODUCTION
In the theory of elasticity, under the assumption of small
strains [1], in environmental conditions within the normal
temperature range of use, under the uniaxial extension, there
is a linear ratio between stresses and strains. It is also
considered that the value of ultimate tensile strength is
approximately constant, as well as the corresponding value of
allowable strain.

Sometimes, the safety factor may be also defined as the


ratio between allowable strain ( m ) and equivalent strain

( 0 ) defined as the vector sum of individual strains due


to different loads. Then we have:

(t ) =

or (t ) = m
e
e

(1)

The safety factor of an elastic body is defined as the ratio


between the constant value of the material ultimate strength
( m ) and maximum equivalent stress ( 0 ) , which is a

This definition is quite simple because it implies that


various types of loads and the manners like they act onto
the elastic body, have no effect on the ultimate strength or
allowable strain of elastic material ( m , m ) , that remain

resultant of stresses caused by different loads.

nearly constant in all conditions.

283

OTEH2016

RELIABILITYOFSOLIDROCKETPROPELLANTGRAINUNDERSIMULTANEOUSACTIONOFMULTIPLETYPESOFLOADS

The maximum equivalent stress ( 0 ) , or corresponding

ratio of two length features, elongation

maximum equivalent strain ( 0 ) are defined as the largest

length ( l0 ) of a body, is dimensionless. Hence, the strain

resultant of different simultaneous stresses (or strains),


induced under the various load action on the body, also
regardless the mode of load action.

rate dimension ( R = d / dt ) is equal to the inverse time.

These two values can also differ in several orders of


magnitude. A good advantage for the structural analysis of
viscoelastic bodies, like it is propellant grain, is that there
is a fortunate, although essentially unexplained [2]
association between temperature and strain rate.
For these reasons it is not possible to estimate the safety
factor for viscoelastic rocket propellant grain in the same
manner as for an elastic body. At first, it is not possible to
define the equivalent tensile strength, nor is it possible to
define the equivalent stress generated in the viscoelastic
body under the simultaneous action of different loads.
For viscoelastic body it is necessary to apply a different
principle. Each individual load has to be considered
separately. Its influence produces some current damage,
presented as a ratio between the induced stress (and strain)
and corresponding ultimate strength (or strain). This ratio
is a relative value less than unity, which occupies a part of
the propellant grain capacity to withstand the fracture.
Total current damage of the solid propellant grain as a
viscoelastic structure can be finally obtained as the sum of
all individual current damages under the mechanical
influences of different loads that simultaneously act on the
grain. Finally, the safety factor is determined as the
reciprocal of the total current damage.
The principle is basically simple, but the analysis may
complicate when environmental loads are changing over
time. This paper presents an example of such an analysis.

2. COMPOSITE ROCKET PROPELLANT


ULTIMATE STRENGTH VARIABILITY
2.1 Viscoelastic mechanical properties
presentation
Mechanical properties of composite rocket propellant are
temperature/and strain rate-dependent. Strain ( ) , as a

Since the effect of strain rate is basically the time effect, it


finally can be said that propellant mechanical properties
depend on temperature and time.
Fortunately, a good circumstance is that there is a clear
correlation between these two different influences, time
and temperature [1-4]. Changing the strain rate (time
influence) for a certain value ( R ) at arbitrary constant
temperature (T = const ) may have the same effect as a
certain temperature change ( T ) , when the strain rate is
constant ( ( R = const ) .
The most commonly used term for connection the time
and temperature influence is Williams-Landed-Ferry
(WLF) equation [5, 6]:
log aT =

C1 (T T0 )
C2 + T T0

(2)

Log at is usually called time-temperature shift factor.


The significance of this feature can be seen from the
Fig.1. The tensile strength vs. strain rate regression curves
on different temperatures are shown for a HTPB
composite rocket propellant [7-9]:
2,0
1,8

Tensile strength, log m(T0/T)

For a viscoelastic material, like composite solid rocket


propellant, the situation is completely different, because
its mechanical properties are highly dependent on
temperature, and vary by several orders of magnitude
within a temperature range of rocket motor use. In
addition, mechanical properties of the propellant also
highly depend on strain rate [1-3]. Since different loads
produce different strain rates, the propellant mechanical
properties will vary with different loads. That means, for
example, that if the very slow temperature load acts onto
the propellant grain in the rocket motor, it produce low
strain rate, and also very low propellant tensile strength, in
comparison with the case when another fast load acts.
When the rocket motor works, pressure during the ignition
and combustion is high, and the operating loads are fast.
They produce high strain rate and the propellant tensile
strength is entirely different and very high.

( l ) and basic

-50 C

1,6
1,4

-30
-10

1,2

10
1,0
0,8
0,6

30
50

Time, log t = log (1/R)

Figure 1. Tensile strength vs. temperature and strain rate


In the temperature range from -60oC to +50oC, at 10oC
increments, uniaxial tension tests were performed [3, 8,
10] at several different tester cross-head speeds in the
range between 0.2 and 1000 mm/min. For each constant
temperature, a number of test points, equal to the number
of different strain rates were obtained. These points form a
curve for each temperature. Thus, regression curves for
different temperatures were obtained.
These curves are approximately equidistant. Moving them
horizontally, along the abscissa (time axis) to meet the
reference line, they will overlap creating a single common
curve termed "master curve" [9]. It is necessary to choose
an arbitrary temperature to be a reference temperature. It
is usually the standard ambient temperature +20oC.

284

RELIABILITYOFSOLIDROCKETPROPELLANTGRAINUNDERSIMULTANEOUSACTIONOFMULTIPLETYPESOFLOADS

Moving along the timeline the curves that correspond to


different arbitrary temperatures (T) will overlap. It shows
the correlation between temperature and time. The value
of displacement is equal to the log aT(T).
It is customary (in all cited literature) to prepare the scale of
the abscissa to represent both influences, temperature and
time. Instead of reciprocal of the strain rate ( t = 1 R ) , which
has the dimension of real time, a single variable reduced
time is introduced ( = 1 ( RaT ) = t aT ) , which is a
combination of temperature and time (strain rate).
Finally, the tensile strength of the composite propellant as
viscoelastic material, can be displayed in the form of
master curve which includes the strength dependence in
all load conditions (Fig.2).

OTEH2016

In another type of load, due to the pressure during the


rocket motor ignition, the pressure is rapidly growing up
and the strain rate is high.
Different strain rates affect the size of the propellant
tensile strength.
Difference between the influences of these two values of
strain rates can be seen in the case of tangential strain
components in the hollow tube grain channel (Fig.4).
For the sake of comparison, let us consider the equations
obtained in the elastic analysis, for tangential strain in the
grain channel, due to the influence of temperature and the
effects of increasing pressure during the rocket motor
ignition.

(daN/cm )

2,2

1
T

log m 0 = 1.11 0.13 log


T

R aT

2,0
1,8

Tensile strength, log m T0/T

1,6

1,4
1,2
1,0
0,8

Figure 4. Hollow tube grain

0,6

In the case of pressure loading, the elastic equation for the


tangential strain in the grain channel [1, 4, 68, 10] has
the following form (3):

0,4
0,2

-7

-6

-5

-4

-3

-2

-1

Reduced time,

log (1/RaT) (s)

( p ) =

Figure 2. Tensile strength master curve


Since the elastic forces in polymers are proportional to
absolute temperature [1, 3, 4], tensile strength has to be
normalized by factor (T0 / T ) .

2.2 Tensile strength dependence on the type of load


Propellant grain (2) (Fig.3), cast directly into the rocket
motor chamber (1), suffers continuous stress due to the
effects of temperature difference between the casting and
ambient temperature. Strain rate after casting, made by the
temperature difference is quite small, because the time
period of propellant grain cooling down after curing, up to
the ambient temperature, is considerably long.

(1 + ) p
2(1 ) K M 2 2(1 ) (3)
1 2
1
E

M 2 1

For the temperature loading, the following expression is


taken from the same references:

(T ) = (1 + ) T 2 K M 2 (1 ) T

(4)

Titles and explanations of all the features in these


expressions are given in the Table 1. The features and
can be obtained using the expressions:
1 c
= 1 + (1 2 ) M 2 + E b ( M 2 1)
1 +
Ec h

(1 ) c (1 c )

(5)

(6)

For arbitrarily selected propellant grain (Fig.4) we can


adopt the following geometric properties and physical
properties of the propellant [11]:
Table 1. Propellant grain properties

mm
mm
mm

a
M

Figure 3. Rocket motor with cast composite


propellant grain
285

50
25
2,0
2,0
2,0

Outer radius of the grain


Inner radius of the grain
Outer/inner radius ratio
Case thickness
Stress concentration in the star
perforated grain

RELIABILITYOFSOLIDROCKETPROPELLANTGRAINUNDERSIMULTANEOUSACTIONOFMULTIPLETYPESOFLOADS

T0

293

Reference temperature

0,5

Propellant Poissons ratio

0,3

Case Poissons ratio

daN
cm 2

600

Propellant modulus

Ec

daN
cm 2

2,1 10

C 1 0,1110 4

4.0

C2

127,0

4(T T0 )
4 (233 293)
=
= 3,582 (13)
127 + T T0 127 + 233 293

Reduced time on the abscissa of the tensile strength


master curve (Fig.2):
log = log 1
R aT

Case modulus of elasticity

C 1 0,93 10 4

C1

log aT =

OTEH2016

= 0, 233 3,582 = 3,815


1
log ( p) = log

R ( p) aT

Propellant coefficient of thermal


expansion
Case coefficient of thermal
expansion

1
log (T ) = log
R (T ) aT

Time-temperature shift factor


coefficients

Since the load conditions were not known in advance, the


propellant modulus value in the Table 1, which depends
on the strain rate, had to be approximately determined
through several iterations.

Replacing the values from the Table 1 into the expressions


(5) and (6) we have = 1,013 and = 0,383 104 . From
the expressions (3) and (4) we get very similar absolute
values and the same directions for the strains due to the
effects of pressure and temperature:
(7)

(T ) 0, 0175

(8)

= 6,398 3,582 = 2,816

There is a significant difference between the two reduced


times, under the effects of temperature and pressure.
Tensile strength can be calculated from the master curve
equation in the Fig.2:
log m

Let us consider the tangential strain in the grain channel


on the lowest extreme temperature of the rocket motor
usage (-40oC). During the rocket motor ignition, the
operating pressure of approximately 200 bar is achieved in
about 10 ms. In addition, the propellant grain is cast at a
temperature of 65oC. After curing at this temperature, the
grain was cooled down for about 12 hours up to the
ambient temperature +20oC.

( p) 0, 0171

(14)

T0
= 1,11 0,13 log 1
T
R aT

(15)

log m ( p) 293 = 1,11 0,13 (3,815) = 1, 606


233
log m (T ) 293 = 1,11 0,13 (2,816) = 0, 744
233

m ( p ) = 32,10 daN2 ; m (T ) = 4, 41 daN2


cm

cm

(16)

The expression (16) shows a big difference between the


propellant tensile strengths in the cases when the pressure
and temperature loads act at the same time. In some other
circumstances, this difference may be even greater.

3. PROPELLANT GRAIN RELIABILITY

CRITERIA
3.1. Effects of a number of different loads

Strain rate is defined as the ratio of strain and the time


needed to achieve the strain:
R = d
dt t

(9)

Strain rates during rocket motor ignition and during the


propellant grain cooling down after curing are shown in
expressions (10) and (11):
R( p) =

( p)
t ( p)

0, 0171
= 1, 71 s 1
10 103

(T )

(10)

0, 0175
R (T ) =
=
= 0, 4 106 s 1
12 h
t (T )

(11)

log R ( p) = log(1, 71) = 0, 233


log R (T ) = log(0, 4 106 ) = 6,398

(12)

It can be concluded that real stresses and strains in a


viscoelastic body depend on the ambient conditions as
well as on the type of load that act on the body.
The safety factor cant be calculated in the same way as
for an elastic body. For an elastic material the tensile
strength is treated as a constant value. Then, the safety
factor is defined as the ratio of the tensile strength ( m )
to the maximum real equivalent stress ( 0 ), as a resultant
of the simultaneous action of different loads.
For a viscoelastic body, if several different types of time
dependent loads L1 (t ), L2 (t ), L3 (t )... act on the body at the
same time, at different loading rates, they produce several
correspondent different stresses, 1 (t ), 2 (t ), 3 (t )... The
strain rates are also different, as well as tensile strengths
that correspond to different loads m1 (t ), m 2 (t ), m3 (t )...

Time-temperature shift factor for the rocket motor lowest


working temperature (-40oC):

It is not possible to consider these loads together. For that

286

OTEH2016

RELIABILITYOFSOLIDROCKETPROPELLANTGRAINUNDERSIMULTANEOUSACTIONOFMULTIPLETYPESOFLOADS

The effect of a number of simultaneous loads can be


represented as the sum of the same number of damage
increments. The same principle is used in the Miners
cumulative damage law [6, 12]:
d (t ) = d1 (t ) + d 2 (t ) + d3 (t ) + ...
(t ) 2 (t ) 3 (t )
d (t ) = 1
+
+
+ ...
m1 (t ) m 2 (t ) m3 (t )

(17)

Total current damage is equal to the sum of the individual


damages. And finally, according to the rules in the theory
of elasticity, the safety factor of the body is equal to the
reciprocal of the total damage:

(t ) = 1

d (t )

(18)

The example with two main, but extremely different loads


in rocket motor, pressure and temperature, shows that
unlike elastic body, for the structural analysis of a
viscoelastic rocket motor propellant grain, it is necessary
to make all functional dependences of its viscoelastic
mechanical properties on temperature and strain rate.
As for mechanical properties of rocket propellant as a
viscoelastic material, in addition to the features of tensile
strength and allowable strain, natural aging of viscoelastic
material must be considered [11, 1316], as well as the
impact of cumulative damage [9, 17, 18].

3.2. Variability of the propellant grain safety


factor
The term damage, in classical terminology, indicates that
the body has suffered a permanent injury. However, when
we talk about "current damage" in the rocket motor
propellant grain, the situation is different. This is not a
term that means irreversible damage. "Current damage" is
a part of the total resistance of the grain that is currently
occupied by load action. It depends on the current loads
and the external forces acting on the grain. Since the
environmental loads, acting on the propellant grain, vary
over the time, the current damage is a time-variable.

first due to acceleration of the aircraft, then due to


aerodynamic heating, also due to vibrations of the wings,
and finally, the most intensive load due to the temperature
difference. Each of these loads causes their partial current
damage. Total current damage is:
d (t ) = d1 (t ) + d 2 (t ) + d3 (t ) + d 4 (t )

(19)

The first three loads terminate and the stresses caused by


them relax and return to zero. Then, the value of total
current damage decreases:
d (t ) d 4 (t )

(20)

This example explains usage the term current damage,


which depend on the intensity of current loads. If the
value of current damage, theoretically was greater than 1,
this would mean that the grain would probably fail. On the
other hand, if it is less than 1, this would mean that there
will be no grain failure. When the first three loads
disappear, the remaining current damage will be only the
part due to environmental temperature d4(t). Some
accumulated damage might remain, but it is almost
negligible if the loads were acting shortly, and sufficiently
small not to exceed the critical values of the safety factor.
Current damage differs in comparison with cumulative
damage, which represents all accumulated damages due
to the effects of previous loads, especially cyclic loads of
high frequency [17, 19].
Current damage is the time dependent feature, even when
only one environmental load acts onto the propellant
grain. This is particularly evident when the cyclic loads
are acting. For example, from the time the propellant grain
is cured, until it has burned away, the ambient temperature
oscillates over every single day, making a cyclic stresses
and strains in the grain, according to approximately sinus
law [7, 8]. During the year, there is another sinus law with
a seasonal period of oscillation.
This damage originates due to the large difference of
coefficients of thermal expansion between propellant and
rocket motor case. There are no stresses in the propellant
during casting into the rocket motor chamber at a higher
temperature (for example +65oC). After curing and
cooling down, the stresses arise due to the difference
between ambient temperature and zero stress temperature.

Some loads may act for a while, and then they disappear
and their influence stops. It is possible for a part of that
influence to remain after the the load action, until the
stress in the viscoelastic body completely relaxes.
In reference [9] the following example was discussed: a
rocket is mounted under the wings of the plane, but it
returned to its base and the rocket was not launched. What
happened with the rocket motor? It suffered stresses at
287

0,3

Current damage, d (-)

reason, the effects of individual loads have to be analyzed


separately, using the concept of convolution [6], which is
valid for linear-viscoelastic materials. Each of the
individual loads creates a certain current damage. Each
individual current damage represents the ratio between
real stress and its appropriate ultimate stress. So, the first
current damage is equal to the ratio between the stress
1 (t ) and the corresponding first tensile strength m1 (t ) .
The same principle applies to the second load, and to all
other loads. As long as the sum of the individual current
damages is less than unit, we can believe that there will be
no grain failure.

0,2

0,1

0,0

10

11

12

Time, t (months)

Figure 5. Annual current damage distribution in the


rocket propellant grain due to environmental temperature

OTEH2016

RELIABILITYOFSOLIDROCKETPROPELLANTGRAINUNDERSIMULTANEOUSACTIONOFMULTIPLETYPESOFLOADS

For a real rocket motor propellant grain, whose basic


properties are defined in Table 1, the annual current
damage distribution is calculated, due to the effects of
environmental temperature (Fig.5). The current damage is
caused due to daily and seasonal temperature changes.
Mathematical model for temperature distribution is made
under assumption that periodic jumps of temperature
appear in regular time intervals. Therefore, in the Fig.5
there are the corresponding jumps of the current damage.
Any additional load causes additional damage. According
to this model for the linear-viscoelastic material, the total
damage is equal to the sum of individual damages.
Current damage increases over time even the loads are
constant, because ultimate properties of the propellant
degrade due to the aging and cumulative damage.
Therefore, the propellant grain safety factor must be
considered as a time-dependent feature for two main
reasons: due to the load variations and also due to the
ultimate strength and allowable strain deterioration.

3.3. Probabilistic approach to the failure criteria


Fig.5 shows large variations of the current damage only
due to the temperature load. In this case, the current
damage variations are not critical because they are much
less than unity. However, when multiple loads operate
simultaneously, and all of them vary extensively, the total
value of current damage vary even more. Then, it is
almost impossible to accurately define its intensity.
In the case of temperature load, Heller and Zibdeh [15, 17,
20] recommended probabilistic methodology for
evaluation the propellant grain reliability. Based on this
model, it is possible to define the similar model for the
action of simultaneous loads.
For example, let us consider the case of tangential stress
( ) in the propellant grain channel. Probability of a
grain failure due to the first single load ( 1 ) may be the
defined simply:
Pf 1 = P ( m1 1 )

3.4. Propellant Grain Reliability


After termination the short-term or concentrated loads,
stresses in the propellant grain relatively quickly relax,
without consequences for the grain. Unlike them,
continuous cyclic loads of high frequencies can cause
grain failure due to the cumulative damage [9, 17-19].
The impact of cyclic loads can be very small, especially
when probability of failure is much lower than unit.
However, there is always the possibility of cracking or
some kind of failure due to many times repeated loads.
Let us suppose that all the single probabilities of failure in
expression (23) are due to the short-term loads, except
one, which is caused by a cyclic load. All other
probabilities will disappear with the end of the load,
except the last one. At the end of each load cycle, such as
daily fluctuations in temperature, the reliability is equal to
the probability of the opposite event to the failure.
Ri = 1 ( Pf )i

To calculate the grain reliability at the end of random


cycle, it is necessary to get the probabilities at the end of
each previous temperature cycle. After ( n ) cycles, using
the multiplication rule of dependent probabilities, the
reliability is equal to an n - fold production of single
probabilities of surviving:
Rn = (1 Pf1 ) (1 Pf2 )

(25)

(1 P )

(26)

fi

i =1

For the propellant grain defined in Table 1, the final


results of structural analysis, probability of failure and
reliability is shown in Fig.6. On the probability curve (1)
periodic irregularities can be seen that are the result of the
mathematical model of temperature disturbances. The
smooth reliability curve (2) is much clearer in relation to
the value of safety factor, which is equal to the reciprocal
of the unclear variable value of current damage (Fig.5).

(21)
1,0

Reliability, Probability of failure

(22)

Finally, the simplest assumption is that the events 1,


2,,n are independent, and probability of grain failure is
the sum of individual probabilities:
Pf = Pf 1 + Pf 2 + ...Pfn

(1 Pfn )

i =n

Rn =

Then, in the case of another, second load ( 2 ) we have:


Pf 2 = P ( m 2 2 ) , etc.

(24)

(23)

Although this model is not quite true, it may be acceptable


in the absence of better one. Further correction of this
model is a matter of experience or research in the field of
structural analysis.
This approach is based on statistical measures of all the
mechanical properties of the propellant as a viscoelastic
material, and also on determination the statistical
parameters for all the real loads and stresses. It is suitable
to assume normal statistical distributions.

0,8

Probability of failure

0,6

0,4

0,2

Reliability

0,0
2

10

12

Time (Year)

Figure 6. Reliability and probability of the grain failure


It is interesting that the grain reliability falls below the
allowable value much earlier before the probability of
failure reaches high values.

288

RELIABILITYOFSOLIDROCKETPROPELLANTGRAINUNDERSIMULTANEOUSACTIONOFMULTIPLETYPESOFLOADS

4. CONCLUSION
This paper discusses the specifics of structural analysis of
solid rocket propellant grains as viscoelastic materials.
The mechanical properties of the composite rocket
propellant depend on many different factors, but most on
the temperature and strain rate. Therefore, determination
of the propellant grain safety factor and reliability is
different in comparison with elastic bodies. This
procedure is quite complex and sometimes impossible. In
engineering practice, due to the complexity, this problem
is typically bypassed or ignored. Some empirical estimates
are usually made, only on the basis of propellant test data
made only in standard conditions. These estimates are
usually inaccurate because they are not on a reliable basis.
Testing the propellant mechanical properties only at the
standard conditions doesnt have any practical application
for the structural analysis, except perhaps as information
in order to compare the properties of different materials.
The structural analysis of viscoelastic materials is
complex and involves a whole series of theoretical and
experimental activities. This is especially evident when
different loads simultaneously act onto a viscoelastic
body. Their effects are also different because of the
specific reactions of material, which is substantially
different from the elastic material.
This paper is an example how the viscoelastic properties
change due to the type and rate of loading. A modified
classic model for the safety factor analysis is presented,
under the multiple action of two or more environmental
loads, through the concept of linear sum of current
damages and probabilistic approach to the evaluation of
reliability.

REFERENCES
[1] Williams,M.L.,
Blatz,P.J.,
Schapery,R.A.:
Fundamental Studies Relating to Systems Analysis of
Solid Propellants, Final report GALCIT 101,
Guggenheim Aero. Lab., Pasadena, Calif. (1961).
[2] Williams,M.L.: Structural Analysis of Viscoelastic
Materials, AIAA Jour., California Ins. of Tecnology,
Pasadena, California, May (1964) 785-798.
[3] Landel,R.F., Smith,T.L.: Viscoelastic Properties of
Rubberlike Composite Propellants and Filled
Elastomers, ARS J, Vol.31, No5, (1960) 599-608.
[4] Fitzgerald,J.E., Hufferd,W.L.: Handbook for the
Engineering Structural Analysis of solid Propellants,
CPIA publication 214 (1971).
[5] Williams,M.L.,
Landel,R.F.,
Ferry,J.D.:
The
Temperature Dependence of Relaxation Mechanisms
in Amorphous Polymers and Other Glass-forming
Liquids, Journal of American Chem. Soc., 1955, 77
(14),pp 37013707, DOI: 10.1021/ja 01619a008.
[6] Solid propellant grain structural integrity analysis,
NASA Space Vehicle Design Crit. SP-8073, (1973).

OTEH2016

[7] Gligorijevi,N.: Strukturna analiza pogonskih


punjenja raketnih motora sa vrstim gorivom,
Vojnotehniki Institut Beograd, (2013), ISSN 18203418, ISBN 978-86-81123-59-1.
[8] Gligorijevi,N.: Istraivanje pouzdanosti i veka
upotrebe raketnih motora sa vrstom pogonskom
materijom (Solid propellant rocket motor reliability
and service life research), Ph.D. Dissertation,
Military Academy, Belgrade, Serbia (2010).
[9] Gligorijevi,N., ivkovi,S., Rodi,V., Suboti,S.,
Gligorijevi,I.: Effect of Cumulative Damage on
Rocket Motor Service Life, J ENERG MATER,
Volume 33, Issue 4, DOI: 10.1080 07370652.
2014.970245. (2015).
[10] ,..: -
, , (1972).
[11] Gligorijevi,N.,
Rodi,V.,
ivkovi,S.B.
Pavkovi,M., Nikoli,S., Kozomara,S.: Suboti,
Mechanical Characterization of Composite Solid
Rocket Propellant based on hydroxy-terminated
polybutadiene,
Chemical
Industry,
2015.
DOI:10.2298/HEMIND150217067G.
[12] Miner,M.A.: Cumulative Damage in Fatigue, J APPL
MECH-T ASME, Vol.12 (1945) 159-164.
[13] Gligorijevi,N., Rodi,V., Jeremi,R., ivkovi,S.,
Suboti,S.: Structural Analysis Procedure for a Case
Bonded Solid Rocket Propellant Grain, Scientific
Technical Review, Vol.61, No.1 (2011) 1-9.
[14] Cerri,S., Bohn,A.M., Menke,K., Galfetti L.: Ageing
Behavior of HTPB Based Rocket Propellant
Formulations, CENT EUR J ENERG MAT, 6(2)
(2009) 149-165.
[15] Heller,R.., Singh,.P.: Thermal Storage Life of
Solid-Propellant Motors, Journal of SPACECRAFT
and ROCKETS, Vol.20, No2, (1983) 144-149.
[16] Gligorijevi,N., ivkovi,S., Suboti,S., Pavkovi,B.,
Nikoli,M., Kozomara,S., Rodi,V.: Mechanical
Properties of HTPB Propellants in the Initial Period
of Service Life, Scientific Technical Review, Vol.64,
No 4 (2014) 1-13.
[17] Heller,R.., Singh,.P. Zibdeh,H.: Environmental
Effects on Cumulative Damage in Rocket Motors, J
SPACECRAFT ROCKETS, Vol.22, No2, (1985)
149-155.
[18] Liu,C.T.: Cumulative Damage and Crack Growth in
Solid Propellant, Media Pentagon Report No
A486323 (1997).
[19] Tormey,J.F., Britton,S.C.: 1963. Effect of Cyclic
Loading on Solid Propellant Grain Structures, AIAA
Journal, No8, Vol.1, pp.1763-1770.
[20] Zibdeh,H.S., Heller,R.A.: Rocket Motor Service Life
Calculations Based on the First Passage Method, J
SPACECRAFT ROCKETS, Vol.26, No4 (1989)
279-284.

289

AN EXAMPLE OF PROPELLANT GRAIN STRUCTURAL ANALYSIS


UNDER THE THERMAL AND ACCELERATION LOADS
SAA ANTONOVI
Military Technical Institute, Rocket Armament Sector, Belgrade, saleantonovic82@gmail.com
NIKOLA GLIGORIJEVI
Military Technical Institute, Rocket Armament Sector, Belgrade, nikola.gligorijevic@gmail.com
ALEKSANDAR MILOJKOVI
Military Technical Institute, Sector for Materials and Protection, Belgrade, aleksandar.milojkovic@gmail.com
SREDOJE SUBOTI
Military Technical Institute, Rocket Armament Sector, Belgrade, sredoje.subotic@gmail.com
SAA IVKOVI
Military Technical Institute, Rocket Armament Sector, Belgrade, sasavite@yahoo.com
BOJAN PAVKOVI
Military Technical Institute, Rocket Armament Sector, Belgrade, bjnpav@gmail.com

Abstract: In the design phase of a rocket motor propellant grain for an anti-armor artillery rocket, the problem of high
stresses and strains due to temperature and acceleration has been considered. Complete mechanical characterization
for a new composite propellant composition is very expensive and takes a long time. Therefore it is usually not done
before the propellant composition completely meets the ballistic requirements of the new rocket motor. In the initial
stage, the measurements of the mechanical properties are performed only in standard conditions, but these data are not
sufficient for reliable analysis. Composite rocket propellant is a viscoelastic material whose mechanical properties
depend on temperature and strain rate and vary in the range of several orders of magnitude. The lack of data required
for a reliable analysis is compensated by using the known data of a similar composite propellant composition. In the
MTI database, for this similar propellant composition there exists a data colection for the complete mechanical
characterization. A method of comparison and extrapolation has been used. Finally, quasi-viscoelastic analysis was
performed for different design solutions, using the finite element method.
Keywords: Propellant Grain, Thermal Load, Acceleration, Viscoelasticity, Mechanical Characterization.

1. INTRODUCTION
During design of a rocket motor, ballistic requirements
has led to a solution of a free standing propellant grain
with the hollow tube channel of variable diameter (Fig.1)
and rather high length to diameter ratio ( L / D 15 ).

Figure 1. Propellant grain


Very lengthy grain (l>2000 mm) and relatively high
coefficient of thermal expansion of the composite rocket
propellant produce a considerably large length difference
( l 15mm ) between the limit values of the rocket
motor temperature range of use (-30, +50oC).
Reliable rocket motor work and its requested service life
require the designer to ensure the correct position of the
propellant grain in the rocket motor chamber in all storage

conditions, and also during handling and transport, until


the moment of use. Therefore, it is necessary to prevent
possibility of any free displacement between the
propellant grain and the other elements of the motor.
It means that the propellant grain has to be securely
fastened in all environmental conditions, with all its
variable lengths, depending on the outside temperature. In
addition, the grain should not be too stressed to avoid
propellant cracking.
For better transparency (Fig.2) the very long propellant
grain is shown in shortened form with two cross-sections
lengthwise. Design solution implies the reliance of the
grain front side (left) onto the rocket motor flange (pos.1),
while the other (right) front side of the grain rests upon the
compensator of temperature dilatation (pos.2), which is
expected to have a sufficient flexibility to accept major
changes in the grain length.
Propellant grain in the rocket motor suffers surface
pressure on the front sides. When the propellant grain is
shortest, at the lowest working temperature -30oC, this

290

ANEXAMPLEOFPROPELLANTGRAINSTRUCTURALANALYSISUNDERTHETHERMALANDACCELERATIONLOADS

surface pressure is the lowest. However, the length of the


grain increases when the temperature rises up to the
maximum working value +50oC. Then, also the surface
pressure increases on the front of the grain. One of the
tasks of structural analysis was to determine whether this
surface pressure is too large, resulting in possible damage
of the propellant.

OTEH2016

At this stage, the designer does not have all necessary data
on the mechanical properties of propellant, because he still
considers different possible propellant compositions.
According to established procedure in the design phase,
mechanical properties of each propellant composition are
tested at tensile tester only in standard conditions (+ 20oC, 50
mm/min) [6, 11]. Sometimes, standard rate tensile tests (50
mm/min) are also carried out at the boundary temperatures (40, +50) in the range of the rocket motor use.
Usually, at this stage of design a complete mechanical
characterization is not carried out, because it includes a
serious process of tensile tests at a number of different test
modes, in order to determine mechanical properties of the
propellant, depending on strain rate and temperature in all
possible regimes.

Figure 2. Propellant grain position in the rocket motor


Furthermore, another important load occurs at the
beginning of the rocket flight. In addition to the surface
pressure developed due to the very slow temperature load
[1-4], there is a very fast and strong concentrated load due
to the axial acceleration of the rocket.
Structural analysis of the propellant grain as a viscoelastic
body is essentially different from the structural analysis of
an elastic body [1-3, 5, 6]. In the preliminary phase, in the
first approximation it is sufficient to make a quasi-elastic
analysis, modeled as elastic, only taking into account the
dependence of the propellant mechanical properties on
temperature and strain rate.
However, this technique is not always possible. For
example, when a viscoelastic body is under action of
different loads at the same time [7, 8], there is a problem
how to define failure criteria, especially if the loads are
acting in completely different ways. Mechanical
properties of the propellant as a viscoelastic material,
under the influence of various loads, may vary more than
2-3 orders of magnitude [1, 3, 9].
These two loads, one due to the temperature and the other
due to the acceleration, act on the propellant grain by
different load rates, which mean by different strain rates.
For each different load alone, and its strain rate, ultimate
strength is different. It is not possible to define the
equivalent stress generated in the propellant grain under
the simultaneous action of two different loads [7], nor is it
possible to define equivalent tensile strength. Each load
has to be considered separately, to determine the feature
named current damage [7] defined as a ratio of real
stress to the ultimate strength that corresponds to the real
stress. Then, individual values of different current
damages can be finally simply summarized to get the total
current damage, using the convolution principle for linearviscoelastic materials [2, 10]. The reciprocal of the total
current damage is equal to the safety factor [7, 10, 11].
In the design phase of the rocket motor, especially
propellant grain, designer usually considers several
possible propellant compositions, in order to achieve the
proper ballistic properties of the propellant and
corresponding shape and dimensions of the grain.

Tensile tests in standard conditions are not sufficient for


structural analysis. They are used for a quick assessment
of the propellant and comparison with other propellants.
The only possibility for a grain designer is to improvise in
a way a preliminary analysis and only approximately
estimate the reliability of propellant grain.
The first step in the preliminary analysis is to make quick
calculations, following the principle for an elastic body [2,
3, 12]. Using the standard data and the finite element
method, the analyst qualitatively defines potential critical
zones in the grain, with the greatest stresses and strains.
In the second step, when there are still no data on the
complete mechanical characterization, it is necessary to
compare approximately standard data with complete data
of a similar arbitrary propellant from the database.
Comparing with the data for the chosen propellant and
shifting the mechanical properties data along the timetemperature axis, the analyzer can approximately define
dependence of the new propellant mechanical properties
in a wide range of temperatures and strain rates in order to
estimate the reliability of the grain in critical zones.
This estimate is very approximate, but in the absence of
complete mechanical characterization it is sufficient.
When the proposed propellant grain in the design phase is
estimated to be sufficiently reliable, the detailed
mechanical characterization of the propellant should be
done in order to completely confirm the theoretical
reliability. This is the phase that follows later.
This paper describes one such approach to structural
analysis, in order to assess the quality of the proposed
propellant grain design.

2. INITIAL EVALUATION OF MECHANICAL


PROPERTIES
2.1 Viscoelastic mechanical properties
presentation
Uniaxial constant rate testing at the three characteristic
temperatures were done by tensile tester Instron-1122, in
standard rate conditions (50 mm/min). The following
diagram is produced in Fig.3:

291

OTEH2016

ANEXAMPLEOFPROPELLANTGRAINSTRUCTURALANALYSISUNDERTHETHERMALANDACCELERATIONLOADS

2.

700

Ultimate strain master curve

600
0

m (%) =

500

+1,366 2
2
7,119

6, 663 + 10,514 e
;

= t

aT

Stress, (N/cm )

-40 C

400

Time-temperature shift factor

3.

300

log aT =

+20 C
200

0,00

0,05

0,10

log E

0,15

Strain, ( - )

Figure 3. Standard rate mechanical properties


Data from the diagram in Fig.3 are shown in Table 1:
Table 1. Mech. properties of the propellant in standard
rate conditions (crosshead speed 50 mm/min)
T

C
-40
+20
+50

m
%
10,41
10,16
9,14

E0
2

daN/cm
50,71
19,98
15,38

daN/cm2
1066,18
335,33
281,68

The data in Table 1 are not sufficient for structural


analysis. Mechanical properties of the propellant as a
viscoelastic material are strongly dependent on the strain
rate and temperature and it is necessary to carry out
uniaxial tensile tests on a large number of different crosshead speeds of the tensile tester and also a large number of
different temperatures in the range of rocket motor use.
The paper [8] shows the results of complete uniaxial
mechanical characterization of one of the HTPB
composite propellants from the MTI database. Uniaxial
tensile tests were carried out at different temperatures (ten
degree steps in the range between -60oC and +50oC) and
twelve different strain rates, using constant crosshead
speeds in the range between 0.2 and 1000 mm/min.

The propellant from the database is similar in structure to


the propellant which is discussed herein and is therefore
selected for comparison in order to make a sufficiently
acceptable preliminary structural analysis.
Such extensive mechanical characterization of a solid
propellant is made occasionally, for certain composition
that is suitable for the propellant grain production in terms
of interior ballistics of rocket motor and also confirmed
during the preliminary structural analysis.

It is assumed that the time-temperature shift factor aT(T)


has the same distribution as for the propellant from the
database. This assumption is acceptable because the
values of aT(T) are very similar for comparable polymer
compositions [13].
Reduced time is defined as:

= t =
aT

R (-) aT -

1
R aT

(5)

strain rate
time-temperature shift factor

In the standard test, with JANNAF specimen [2, 8] and


crosshead speed of the tensile tester v = 50 mm / min ,
strain rate is:

l0

R=

v (mm/min)
= 50 = 0.012148
60 l0 (mm) 60 68, 6

basic length of propellant specimen


log = log R log aT

(6)

(7)

On the basis of these three measurements with the


standard cross-head speed of the tester, three points are
obtained to be inserted into the diagram along with the
master curve for the propellant from database. These
values are shown in Table 2.
Table 2. Mechanical properties of the new propellant and
corresponding reduced times

Tensile strength master curve:


T0
= 1,11 0,129 log t
T
aT

(4)

When the same reference temperature (20oC) is adopted


for determination a master curve for new propellant, the
three measured points can be easily plotted into the
diagram mechanical property vs. reduced time.

From the above mentioned work [8] the following results


and dependencies were taken:

log m

T0
= 2, 20 0,138 log t
T
aT

If the three measured points for the new propellant are


entered into the existing diagrams of the propellant from
the database, approximate expressions for description the
properties of the new propellant can be made.

Master curves for the three most important mechanical


properties: tensile strength, initial modulus and ultimate
strain are made showing their dependence on the both
time and temperature.

1.

(3)

+50 C

4, 0 (T 20)
127 + T 20

Initial modulus master curve

4.

100

(2)

(1)

292

log

-40
+20
+50

-1,666
1,9155
2,680

T
log m 0
T

daN/cm2

T
log E0 0
T

daN/cm2

10,41
10,16
9,14

1,805
1,301
1,145

3,127
2,525
2,407

OTEH2016

ANEXAMPLEOFPROPELLANTGRAINSTRUCTURALANALYSISUNDERTHETHERMALANDACCELERATIONLOADS

Based on comparison with the master curve, one can see


that the three inserted points are sufficient to estimate a
similar flow of the master curve for the new propellant.
The tensile strength is shown in Figure 4.

but it is not possible to define the equation for the


regression curve.
20

2,0

Allowable strain, m (%)

18

Y =1,562-0,1484 X
New propellant

log (m T0/T)

1,5

1,0

0,5

16
14
12
10
8

Y =1,110-0,1289 X
Propellant from database

-6

-5

-4

-3

-2

-1

log (t/aT) = log


-6

-4

-2

Figure 6. Ultimate strain

log (1/RaT) = log

Creating the expressions for description the behavior of


the new propellant under all possible conditions, we got
the capability to calculate the real stresses and strains
under various types of loads and allowable mechanical
properties of the new propellant, which are valid for the
current damage assessment.

Figure 4. Tensile strength


The initial modulus is shown in Fig.5:
3,5

Y =2,849 - 0,1664 X
New propellant

log E (T0/T)

3,0

2.2. Strain rates under the effects of temperature


and axial acceleration

2,5

The first condition for a proper analysis of the effects of


acceleration on the stress occurrence in the propellant
grain is to determine the strain rate and the corresponding
mechanical properties of the propellant.

2,0

1,5

1,0

Y =2,20 - 0,138 X
Propellant from database

-6

-4

-2

An example of a propellant grain deformation due to the


axial acceleration is shown in Fig.7.
2

log (t/aT) = log

Figure 5. Initial modulus

Linear character of the two existing master curves (tensile


strength and initial modulus) allows the analyzer to adopt
the assumption that the master curves for the new
propellant also applies the similar linear character.
The same shape of the master curve is assumed for the
new propellant, shifted along the reduced time axis. This
assumption is quite acceptable and almost obvious from
the Figures 4 and 5, although there are some opinions that
this shift may be partly also in the direction of the vertical
axis [14].

a
b

Figure 7. Grain deformations under axial acceleration

According to [1, 3, 15] for the longitudinal deformation of


a propellant grain with a circular channel, due to the axial
acceleration, the following expression (8) is used:

In the case of ultimate strain, the analogy between the two


propellants is less visible because the number of measured
data is small, and dispersion of points is very large. In
Figure 6 the three measured points for the new propellant
are shown, along with the master curve for the propellant
from database. The position of the three measured points
looks like the position of the points on the master curve,

293

l =

3 g N b 2 r 2

a 2 ln b
2 E
2
r

axial displacement

radius of the hollow tube, a = 17,00 mm

outer radius of the grain, b = 59,25 mm

arbitrary radius of the grain

(8)

OTEH2016

ANEXAMPLEOFPROPELLANTGRAINSTRUCTURALANALYSISUNDERTHETHERMALANDACCELERATIONLOADS

gN

axial acceleration

propellant density, = 1,762 g / cm 3

propellant modulus

acceleration load. Dividing the strain with the time ( t a )


the strain rate is obtained:
d = 0, 0041 = 0, 0041
dt ta
E ta
E 35 103

On the basis of projected characteristics of the missile,


axial acceleration is equal to 38,5 g.
Since the real values of the mechanical properties of the
propellant were not known, a basic qualitative analysis
was made, as if the propellant was an elastic material,
using the finite element method. This analysis has enabled
the designer to define exactly the critical zones of the
propellant grain.
Splitting the propellant grain into the finite elements is
shown in Fig.8, with special attention to the grain
connections to the other elements of the rocket motor.

Figure 8. Finite element grid

Qualitative analysis has shown that the most loaded zone


is at the right end of the propellant grain (Fig.9) at the site
of contact with the compensator of temperature dilatation.
This zone is approximately about half the grain thickness,
at the radius r 0,5 (a + b) 38mm .
In this way, all the features in the expression (8) are
determined, excluding the value of the initial modulus E .

l (cm) =

0,82
E (daN cm2 )

(9)

d ( s 1 ) =
0.117
dt
E ( daN/cm 2 )

(11)

Expression (11) is undefined because the modulus is


unknown. The variable value of the modulus is defined by
the approximate expression in Fig.5:
T
log E 0 = 2,849 0,1664 log 1
 aT
T

(12)

In this case, an initial value of the modulus was arbitrarily


adopted, and the real value of the modulus is determined
during an iterative procedure by the expressions (11) and
(12). When the right value of the initial modulus of the
propellant is determined, at the same time also the value
of strain rate is determined (11).
Any analysis of this type is usually done at standard
temperature and after that also at the two extreme
temperature limits in the region of the rocket motor use.
For each temperature, the value of modulus is different,
and the iterative procedure of the modulus determination
is repeated.

2.3 Tensile strength definition in different cases


For the three different temperatures, the values of initial
modulus are determined, and also the corresponding
values of reduced time under the axial acceleration
loading. The reduced time is the feature on the abscissa of
all the three master curve diagrams for the mean
mechanical properties of the propellant (Figs.4-6). Finally,
based on the three different values for the reduced time
under the acceleration load, the values of tensile strength
are determined, using the expression on the Fig.4:
T
log m 0 = 1,562 0,1484 log 1
T
 aT

(13)

Figure 9. The most stressed zone of the propellant grain

These values are shown in Table 3, just to present an order


of magnitude in the change of tensile strength at different
temperature conditions under the same load.

The strain of the grain in the axial direction is equal to the


ratio between elongation and the basic length of the grain:

Table 3. Initial modulus and tensile strength under the


acceleration load at three different temperatures

() = l =
l0

0,82
l
=
200(cm) 200 E (daN cm 2 )

log

daN/cm2

daN/cm2

-40
+20
+50

543,5
204,0
172,5

0,085
3,241
3,933

28,18
12,05
10,49

(10)

( ) =

0, 0041
E (daN cm 2 )

According to the analysis of the rocket movement,


estimated time to reach its maximum acceleration is
approximately ( ta 35 ms ).This time is also the time to
reach the maximum strain of the propellant grain due to

It is seen in Table 3 that the tensile strength at -40oC is


about 3 times greater than the tensile strength at + 50oC.
Another type of load is caused by temperature dilatation.

294

OTEH2016

ANEXAMPLEOFPROPELLANTGRAINSTRUCTURALANALYSISUNDERTHETHERMALANDACCELERATIONLOADS

Maximum elongation and strain of the propellant grain


can be achieved in the range between the two temperature
extremes (-30 to + 50oC), but this range is not competent
for this calculation, as these are two seasonal temperatures
with a wide time interval between.
During a day over the season a change in temperature is
no greater than about 20oC, and the longitudinal
deformation of the propellant grain is:

3.

= l = T
l0

= 0,93 1040 C 1 20o C 0,186 106


-

seems logical, because the tensile strength is the


lowest at +50oC, regardless of whether the load is fast
or slow. This conclusion also might have been
expected, even though it differs substantially from the
example of case-bonded propellant grains, where the
temperature stresses are lower at high temperatures.
The greatest stresses occur at the right front side of
the grain, in the area of contact between the grain and
the compensator of dilatation. This can be seen in the
lower-right corner of the Fig.10. Brighter fields are
zones with higher stresses.

(14)

coeff. of thermal expansion of the propellant [6,

11]
This change can realistically happen in the range of about
8 hours. Then, the strain rate is:

0,186 10
 = =
t

8h

0,186 106
= 0, 646 1011
8 60 60

This strain is so small that the approximate expressions


(12) and (13) give us very low, unrealistic values for
modulus and tensile strength. For very small values of
reduced time, the linearity of the modulus and the strength
dependence stops [6, 11]. Just for comparison with the
values due to the acceleration, these unreliable values are
shown in Table 4:

4.

independently, similar to the principle of service life


estimation in the analysis of fatigue [17]. Then,
individual values of two different current damages
were simply summarized to get the total current
damage, using the convolution principle for linearviscoelastic materials:

Tab.le 4. Initial modulus and tensile strength under the


thermal load at three different temperatures
T
o

E
2

log

daN/cm

daN/cm2

-40
+20
+50

30,45
8,65
7,98

7,608
11,491
11,954

2,16
0,80
0,68

Figure 10. Load distribution


Analysis of the safety factor ( ) in the case of
simultaneous action of two completely different loads
is made using the model of single current damages
( d ) caused by individual loads that operate

d (t ) = d1 (t ) + d 2 (t )
(t ) 2 (t )
d (t ) = 1
+
m1 (t ) m 2 (t )

Just a brief look at the tensile strength values in Tables 3


and 4 shows that the tensile strength due to the
acceleration is about 15 times greater than the tensile
strength under the temperature dilatation.
When this difference is perceived, it becomes clear what
kind of problem occurs in the structural analysis if the two
different loads act simultaneously.

The reciprocal of the total current damage is equal to the


safety factor [3, 6, 7, 11].

(t ) =
1.

3. SHORT ANALYSIS DESCRIPTION


Due to insufficient data on mechanical properties of the
propellant, the results of structural analysis are only
approximate and are discussed in qualitative terms.
The analysis has led to the following conclusions:
1. Stresses due to temperature dilatation are almost
evenly distributed along the propellant grain. When
the acceleration load is acting, this balance disappears
and the right part of the grain, near the nozzle, is
more stressed (Fig.9). This conclusion could be
expected in advance.
2. Safety factor, determined by classical methods of
elastic analysis, is lower at +50oC than at -40oC. This

(15)

2.

295

1
d (t )

(16)

The critical case of two simultaneous loads arises


when the acceleration load acts onto the grain at
+50oC. According to the failure criteria based on
tensile strength, estimated value of the current
damage is close to the limits of propellant resistance.
But in this case the loads are compressive, and it is
known that the compressive strength is always greater
than the tensile strength, sometimes more than two
times [16], so the estimation of the safety factor is
satisfactory.
Both the results of the approximate structural analysis
and the uncertainty about the real resistance of the
propellant grain, initiated a new solution of the
connection between the left side of the grain and the
left flange of the motor (Fig.11). On the left side of
the propellant grain a solid part is set, that acts as an
inhibitor and also makes a threaded connection
between the grain and the motor flange.

ANEXAMPLEOFPROPELLANTGRAINSTRUCTURALANALYSISUNDERTHETHERMALANDACCELERATIONLOADS

3.

Figure 11. New design of the grain connection


The threaded connection accepts a large part of the
load that propellant grain suffers during the
acceleration. Load distribution is better. In Fig.12
dark fields along the grain can be seen, which indicate
the zones with lower stresses.

Figure 12. - Load distribution over the new grain design

4. CONCLUSION
This paper is an example of a basic level of viscoelastic structural analysis, named "preliminary analysis", which shows the greater complexity of this approach in comparison to the simpler analysis of an elastic body. The example also shows some problems that may arise in determining the safety factor of a viscoelastic body, even when a complete mechanical characterization of the viscoelastic material is available. A viscoelastic rocket propellant grain is considered.
A simplified method is presented for determining the mechanical properties of a composite rocket propellant that has been tested only in standard conditions. A method is explained for comparing with, and using, the known results of a completely tested similar propellant from the database.
The simultaneous action of two completely different loads is analyzed, resulting in the problem of defining a failure criterion, since the allowable mechanical properties of viscoelastic materials strongly depend on the strain rate. An approximate estimation has been made of the dependence of the mechanical properties of the viscoelastic composite propellant on strain rate and temperature. This estimate also enabled an approximate, but qualitatively good, structural analysis of the propellant grain.
The results of the analysis, unreliable and at the limit of the propellant resistance, initiated the development of a new design of the propellant grain with higher reliability. The final analysis confirmed a satisfactory reliability of the new design.

References
[1] Williams,M.L., Blatz,P.J., Schapery,R.A.: Fundamental
Studies Relating to Systems Analysis of Solid
Propellants, Final report GALCIT 101, Guggenheim
Aero. Lab., Pasadena, Calif. (1961).


[2] Fitzgerald,J.E., Hufferd,W.L.: Handbook for the Engineering Structural Analysis of Solid Propellants, CPIA Publication 214 (1971).
[3] Solid propellant grain structural integrity analysis, NASA Space Vehicle Design Criteria SP-8073, (1973).
[4] Heller,R.A., Singh,M.P.: Thermal Storage Life of Solid-Propellant Motors, Journal of Spacecraft and Rockets, Vol.20, No.2, (1983) 144-149.
[5] Williams,M.L.: Structural Analysis of Viscoelastic
Materials, AIAA Journal, Calif. Institute of Tecnology,
Pasadena, California, May (1964) 785-798.
[6] Gligorijevi,N.: Strukturna analiza pogonskih
punjenja raketnih motora sa vrstim gorivom,
Vojnotehniki Institut Beograd, (2013), ISSN 18203418, ISBN 978-86-81123-59-1.
[7] Gligorijević,N., Živković,S., Rodić,V., Subotić,S., Gligorijević,I.: Effect of Cumulative Damage on Rocket Motor Service Life, Journal of Energetic Materials, Vol.33, Issue 4, DOI:10.1080/07370652.2014.970245, (2015).
[8] Gligorijevi,N., Rodi,V., ivkovi,S. Pavkovi,B.,
Nikoli,M., Kozomara,S., Suboti,S.: Mechanical
Characterization of Composite Solid Rocket
Propellant based on hydroxy - terminated
polybutadiene,
Chemical
Industry,
2015.
DOI:10.2298/HEMIND 150217067G.
[9] Landel,R.F., Smith,T.L.: Viscoelastic Properties of
Rubberlike Composite Propellants and Filled Elastomers, ARS Journal, Vol.31, No.5, (1960) 599-608.
[10] Gligorijevi,N., Rodi,V., Jeremi,R., ivkovi,S.,
Suboti,S.: Structural Analysis Procedure for a Case
Bonded Solid Rocket Propellant Grain, Scientific
Technical Review, Vol.61, No.1 (2011) 1-9.
[11] Gligorijevi,N.: Istraivanje pouzdanosti i veka
upotrebe raketnih motora sa vrstom pogonskom
materijom (Solid propellant rocket motor reliability
and service life research), Ph.D. Dissertation,
Military Academy, Belgrade, Serbia (2010).
[12] Gligorijevi,N. Prilog strukturnoj analizi vezanog
pogonskog punjenja raketnog motora sa vrstom
pogonskom materijom, Mainski fak. u Beogradu,
agistarski rad, (1989).
[13] Williams,M.L., Landel,R.F., Ferry,J.D.: The Temperature Dependence of Relaxation Mechanisms in
Amorphous Polymers and Other Glass-forming
Liquids, J. Am. Chem. Soc., 1955, 77 (14), pp 3701
3707, DOI: 10.1021/ja01619a008.
[14] Bills,K.W.jr., Time-Temperature Superposition Does
Not Hold for Solid Propellant Stress Relaxation,
Aerojet Solid Propulsion Company, Sacramento,
California, AIAA Journal, CPIA Publication 283
(1983).
[15] ,..: -
, , (1972).
[16] Thrasher,D.I., Hildreth,J.H.: Structural Service Life
Estimate for a Reduced Smoke Rocket Motor, Journal
of Spacecraft, Vol.19, No.6 (1982) 564-570.
[17] Miner,M.A.: Cumulative Damage in Fatigue, Journal
of Applied Mechanics, Vol.12 (1945) 159-164.


TRANSFER OF GRANULATED PBX PRODUCTION TO THE INDUSTRIAL SCALE
SLAVICA TERZIĆ
Military Technical Institute, Belgrade, slavica@alogodesk.com
STANOJE BIOČANIN
Prva Iskra-Namenska proizvodnja, Barič
ALEKSANDAR ĐORĐEVIĆ
Prva Iskra-Namenska proizvodnja, Barič
ŽIVKA KRSTIĆ
Prva Iskra-Namenska proizvodnja, Barič
BILJANA KOSTADINOVIĆ
Prva Iskra-Namenska proizvodnja, Barič
ZORAN BORKOVIĆ
Military Technical Institute, Belgrade

Abstract: The transfer of granulated PBX (Plastic Bonded eXplosive) production technology to the industrial scale, in "Prva
Iskra - Namenska proizvodnja" Company, is established. Two compositions of granulated PBX, based on octogen (HMX)
and polymer phlegmatizers (Estane and Viton A) are prepared by the aqueous/solvent slurry coating technique, on laboratory
and industrial scale. The quality is analyzed and compared for laboratory and industrial prepared granulated PBX samples.
The quality of polymer coating layer on HMX crystals is examined by microscopic analysis and presented in photographs.
The comparative analysis of phlegmatizers contents in PBX samples is done, as well as granulometric analysis of PBX
granules and the sensitivity tests to friction and impact.
Keywords: high explosives, granulated PBX, coating technique, laboratory and industrial PBX processing.
1. INTRODUCTION

A large number of research projects in the world are related to the reduction of warhead vulnerability and the development of thermostable explosive formulations with low impact sensitivity. PBXs (Plastic Bonded Explosives) are mixtures of explosive materials and polymer binders/phlegmatizers. PBXs have been commonly used in both the military and industry because of their improved safety, enhanced mechanical properties, and reduced vulnerability during storage and transportation. Granulated PBXs are crystalline high explosives coated by plastic phlegmatizers. These explosive compositions are known to have high-performance characteristics, but low friction and impact sensitivity. Pressed PBXs are used as boosters and main charges in most modern warheads.

Following the new trends and quality requirements for explosive charges, new cast and granulated PBX compositions have been developed in the Military Technical Institute (MTI).

The laboratory production technology of granulated PBX was defined in MTI by 2011. The processing of two PBX compositions, FOP-5E (similar to LX-14 [5]) and FOP-5VA (similar to PBXN-5 [6]), on the laboratory scale (production batch mass from 50 g to 1 kg) was established, and the parameters of the PBX compacting process were defined. The characterisation of granulated PBX samples and pressed PBX charges was done [1-4]. The PBX compositions obtained with this technology satisfy the quality requirements of the military standards [5, 6].

In the period 2013 - 2014, the transfer of granulated PBX production technology to the industrial scale, and the quality verification of pressed PBXs through their application in warheads, were realized. The industrial processing of PBX was implemented on the adapted technological line for the production of phlegmatized explosives, in collaboration with the "Prva iskra - Namenska proizvodnja" Company, Barič (PIN).

The quality of pressed PBX was earlier confirmed through the functional testing of thermobaric warhead models (PBX is used as the central explosive charge) and of a hollow-charge warhead (PBX is used as the main explosive charge) [7-9].
This paper gives a brief overview of the realized laboratory examinations and of the process of defining the technological parameters of industrial production of granulated PBX, and compares the characteristics of PBX samples produced on the laboratory and the industrial level for two granulated PBXs with the following chemical compositions:
FOP-5E: 95 % HMX and 5 % Estane;
FOP-5VA: 95 % HMX and 5 % Viton A.

2. PREPARATION OF GRANULATED PBX ON LABORATORY SCALE

The production technology of granulated PBX is based on the precipitation of the polymer phlegmatizer from a solution in an organic solvent, and the forming of a coating layer on the particle surface of the explosive. There are a few techniques for the production of granulated PBXs [10]. All procedures were performed in the liquid phase (an aqueous slurry of the crystalline explosive and a solution of the phlegmatizer in an organic solvent) and are named aqueous-slurry (or water-slurry) methods.


Laboratory equipment for laboratory production of PBX


batch (mass 50 g) is shown in Figure 1.

2.1. Components of explosive molding powder


PBX compositions were prepared by the water-slurry phlegmatization technique, on the laboratory scale. The following raw materials were used:
Crystalline octogen (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine), HMX type DYNO class A/C (Dyno Nobel), was used as the explosive component. The bulk density of the used HMX was 1003 g/dm3. The size range of 80 % of the HMX crystals was 140 µm - 560 µm. A microscopic photograph of HMX DYNO A/C is presented in Figure 5a.
Estane (BF Goodrich Co.) was used as a phlegmatizer. Estane is a thermoplastic polyurethane, i.e. a poly(ester-urethane) copolymer.
Viton Fluoroelastomer Type A (Du Pont Company) was used as a phlegmatizer. Viton A is a copolymer of vinylidene fluoride and hexafluoropropylene.

Figure 1. Laboratory processing equipment and PBX


batch mass 50 g
PBX samples are produced by adding a phlegmatizer solution
to aqueous slurry of the crystalline octogen in glass vessel reactor (volume 1 dm3). The reaction mixture solution is heated
in improvised water bath and mixed by propeller stirrer. The
temperature of the reaction mass is maintained on constant
value until evaporation of the complete content of organic
solvent. During homogenization and heating phase, in which
polymer phlegmatizer was dissolved, the organic solvent is
removed by evaporating from system whereupon the
phlegmatizer deposited on the surface of explosive crystals,
forming a coating layer on the surface of the octogen particles
(i.e. granules of PBX).

Excellent physical, chemical and mechanical characteristics as


well as excellent compatibility with high explosives qualified
polymers Estane and Viton A for coating process [10].
2.2. Defining the explosive coating technique
The definition of the explosive coating technique started with the production of a laboratory PBX batch (mass 50 g). The first step was the selection of the most suitable solvent for the polymer phlegmatizer. The organic solvent to be used in the coating process of the explosive must satisfy specific requirements [1]. The best quality of PBX was obtained with the combination methylethyl ketone - Estane for FOP-5E and ethyl acetate - Viton A for FOP-5VA [2, 3].

The characterisation results showed that two granulated


PBX compositions, based on octogen and selected
polymers, completely satisfy quality requirements [5, 6].
They were obtained by optimized PBX processing technique
[2, 3] by using a methylethyl ketone for dissolving of Estane
(FOP-5E), and ethyl acetate for Viton A (FOP-5VA).
PBX quality is confirmed on larger laboratory batches (mass
up to 1 kg). Laboratory equipment (double-jacket reactor,
volume up to 10 dm3, propeller stirrer, ventilation systems
and movable filtering system) for processing larger PBX
batches is shown in Figure 2.

The following technological parameters of the polymer coating process (PBX processing) were defined and optimized:
- the mass of octogen and the mass of polymer phlegmatizer;
- the ratio of organic solvent volume and polymer phlegmatizer mass;
- the ratio of liquid and solid phases (water and crystalline high explosive);
- the mixing speed of the aqueous slurry during components dosage and the coating process of octogen;
- the dosing speed of the phlegmatizer solution into the aqueous slurry of the crystalline explosive;
- the temperature of the aqueous slurry of the crystalline explosive during phlegmatizer solution dosage;
- the maximum temperature of the coating process (i.e. the rate of solvent evaporation);
- the duration of the explosive coating process (i.e. the time of solvent evaporation).
Figure 2. Laboratory processing equipment and PBX batch (mass 1 kg)

Further research focused on increasing the production batch of granulated PBX.

3. PRODUCTION OF GRANULATED PBX ON INDUSTRIAL SCALE

The small capacity and the lack of technical conditions for the condensation of solvents after their removal (evaporation) from the reactor (i.e. the impossibility of solvent reflux) were the most important disadvantages of the laboratory equipment. This reflects on both the economic and the environmental aspects of the practical application of the PBX processing technology. Solvent reflux was introduced during the transfer of the technology to the industrial scale, and the coating of explosives (PBX processing) was "closed" in PIN. The application of solvent reflux in PBX processing reduced the pollution of the environment and the production cost of granulated PBX (recycled solvent can be used in the next PBX batch).

3.1. Adaptation of industrial processing equipment

The industrial processing line (for the production of phlegmatized explosive compositions) was adapted (Figure 3) and the PBX production technology was transferred to the industrial scale in PIN, in 2012.

The following corrections of the processing line and supporting technological equipment were made:
- the water-dosage double-jacket reactors were designed and installed;
- the phlegmatizer solution-dosage double-jacket reactors were designed and installed;
- the industrial condenser was designed and installed;
- the containers/reactors for collecting the condensate (recycled solvent) were designed and installed;
- the vacuum pump and pipe system for the transfer of phlegmatizer solution (from the preparation vessel/reactor to the dosage vessel/reactor) were designed and assembled;
- the new filtering system for PBX washing and filtering was designed and assembled (Figure 4).

The adapted technological line (Figure 3) prevents vapors of organic solvents from reaching the ambient atmosphere during PBX processing. Recycled solvent is collected and stored so that it can be reused to produce new PBX batches. The loss of solvent was approximately 10 %, but this amount is offset by the addition of fresh solvent in the next PBX batch.

3.2. Components of industrially produced granulated PBX

The following explosives were used:
- recycled HMX (PIN), for Lot I; the size range of 90 % of the HMX crystals was 75 µm - 500 µm;
- HMX, type DYNO class A/C, for Lot II (the same as for the laboratory PBX batches).
The following polymer phlegmatizers were used:
- polyurethane elastomer Estane (BF Goodrich Co.);
- Viton Fluoroelastomer Type A (Du Pont Company).

3.3. Defining of the industrial PBX production

The industrial PBX production process was defined for both PBX compositions, FOP-5VA (HMX/Viton A 95/5) and FOP-5E (HMX/Estane 95/5).

The phlegmatizer solution is prepared in a double-jacket reactor and is then transferred to the dosage reactor by the vacuum pump and pipe system. Further, 250 - 300 liters of water and 50 kg of crystalline octogen are added, and the obtained aqueous suspension of the crystalline octogen is continuously heated by technical steam through the jacket wall and mixed by a propeller stirrer. The phlegmatizer solution is dosed, in a thin stream, into the heated aqueous suspension of octogen. The dosing starts at a temperature of about 50 °C, the total volume of the phlegmatizer solution is dosed over a period of 20 minutes, and the temperature is then increased to about 70 °C. The solvent evaporation is carried out until the temperature inside the reaction mixture reaches 100 °C.
At the end of the coating process the reaction mixture is cooled to a temperature of 50 - 55 °C. It is then dropped onto the new filtering system, where the PBX granules are washed and decanted (Figure 4). Wet PBX granules are packed in polyethylene bags and stored in the explosives warehouse.

Figure 3. Adapted technological line for the PBX production, PIN
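The coating cycle described above can be summarized as a simple process recipe. The following Python sketch only restates the reported set points (dosing start at about 50 °C, 20-minute dosing, ramp to about 70 °C, evaporation until 100 °C, cooling to 50 - 55 °C) as a checkable configuration object; the class and field names are illustrative and are not part of any MTI or PIN software.

```python
from dataclasses import dataclass

@dataclass
class CoatingCycle:
    """Set points of the industrial water-slurry coating cycle (values from the text)."""
    water_volume_l: tuple = (250, 300)       # liters of water charged to the reactor
    hmx_mass_kg: float = 50.0                # crystalline octogen per batch
    dosing_start_c: float = 50.0             # slurry temperature when dosing begins
    dosing_time_min: float = 20.0            # time to dose the whole phlegmatizer solution
    ramp_target_c: float = 70.0              # temperature after dosing
    evaporation_end_c: float = 100.0         # evaporation continues until this temperature
    cooldown_range_c: tuple = (50.0, 55.0)   # cooling before filtering and washing

    def check(self) -> None:
        # Basic sanity checks on the ordering of the temperature set points.
        assert self.dosing_start_c < self.ramp_target_c < self.evaporation_end_c
        assert self.cooldown_range_c[0] <= self.cooldown_range_c[1] < self.evaporation_end_c

cycle = CoatingCycle()
cycle.check()
print(cycle)
```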


The content of Estane in FOP-5E Lot II was slightly higher (5,6 %) than the designed value (5,0 ± 0,5 %), but the result for FOP-5E Lot I was within the designed range (Table 1).
The values of bulk density of both industrial batches (Table 1) are in accordance with the standard values for similar phlegmatized explosive compositions (≥ 700 g/dm3) and are suitable for pressing.
The sensitiveness of the octogen crystals to friction is 128 N, and to impact 2 J. The presence of Estane in the FOP-5E samples reduced their sensitiveness to friction and impact (Table 1) compared to that of the crystalline octogen. The sensitiveness to friction and impact of the industrially obtained FOP-5E granules was at the level of the sensitivity of the laboratory FOP-5E samples (Table 1).

Figure 4. Industrial PBX batch, mass 50 kg; packing in


polyethylene bags
The temperature values and stirring rate of aqueous slurry of
the crystalline explosive and reaction mixture solution are
varied during optimization of technological parameters of
industrial PBX production. Analysis of particle size and
chemical composition were done for all PBX batches. The
obtained results were used for the correction of
technological parameters for the production of next PBX
batchs [4].

The quality of granulated PBX samples (coating


effectiveness) is determined by microscopic analysis.
Microscopic photographs of PBX samples are presented in
Figures 6 - 9.

4. METHODS OF CHARACTERISATION
The characterisation of granulated PBX was realised in PIN
and MTI laboratories and included the following tests:
- Analysis of the chemical composition according to [5, 6].
- Analysis of the granulometric size on selected sieves (425 µm, 600 µm, 850 µm, 1190 µm and 1600 µm) according to [11], illustrated in the sketch after this list.
- Determination of bulk density according to [11].
- Microscopic analysis of granulated PBX samples. All PBX samples and the used octogen crystals were photographed by a Canon PowerShot S40 digital camera and a LEICA stereo microscope.
- Determination of the sensitiveness of explosives to friction and impact, on Julius Peters apparatuses, according to [12, 13].
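As an illustration of the granulometric analysis listed above, the short Python sketch below converts the masses retained on the selected sieves into the mass fractions (% m/m) that are plotted in Figures 10 and 15. The retained masses and the bulk-density sample values used here are made-up placeholders, not measured data.

```python
# Hypothetical sieve analysis: mass retained on each selected sieve (grams).
sieve_sizes_um = [1600, 1190, 850, 600, 425, 0]      # 0 = pan (fines passing the finest sieve)
retained_g     = [1.2,  6.5, 18.4, 48.9, 20.3, 4.7]  # placeholder masses, not measured data

total = sum(retained_g)
fractions = [100.0 * m / total for m in retained_g]

print("sieve (um)   retained (% m/m)")
for size, frac in zip(sieve_sizes_um, fractions):
    label = f">{size}" if size else "pan"
    print(f"{label:>10}   {frac:6.1f}")

# Bulk density from a graduated-cylinder measurement (placeholder numbers):
sample_mass_g, sample_volume_dm3 = 80.3, 0.100
print(f"bulk density = {sample_mass_g / sample_volume_dm3:.0f} g/dm3")
```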

Figure 5. Octogen crystals: (a) HMX DYNO A/C; (b) recycled HMX, PIN

5. PRESENTATION AND DISCUSSION OF


CHARACTERIZATION RESULTS
5.1. Characteristics of FOP-5E
Figure 6. Sample FOP-5E, Lot 50 g

The values of bulk density, chemical composition and sensitiveness to friction and impact of the different production batches (Lots) of FOP-5E are presented in Table 1.

Table 1. Characteristics of laboratory and industrial batches (Lots) of explosive FOP-5E

                                 Lot 50 g   Lot 1 kg   Lot I 50 kg   Lot II 50 kg
  Content of Estane (%)          5,35       4,75       -             5,60
  Bulk density (g/dm3)           803        767        930           813
  Sensitiveness to friction (N)  216        240        216           -
  Sensitiveness to impact (J)    6,5        -          -             -
  Microscopic analysis, Figure   6          7          8             9

Figure 7. Sample FOP-5E Lot 1kg

300

OTEH2016

TRANSFEROFGRANULATEDPBXPRODUCTIONTOTHEINDUSTRIALSCALE

The contents of Viton A in the industrial batches of FOP-5VA (Lot I - 5,12 % and Lot II - 5,04 %) were close to the designed value (5,0 ± 0,5 %). The values of bulk density of both industrial batches were slightly lower (Table 2) than the standard values for similar phlegmatized explosive compositions. However, this can be controlled by varying the mixing rate of the reaction mixture, because it is directly connected with the distribution of the granule size in the sample of the explosive composition. Also, the addition of emulsifiers affects the agglomerate formation and the granulometric size distribution [2, 3].

The results of the microscopic analysis of the industrially obtained FOP-5E granules (Figures 8 and 9) show that the octogen crystals (Figure 5) were uniformly coated with polymer material, as in the laboratory obtained samples (Figures 6 and 7).

Figure 8. Sample FOP-5E, Lot I 50 kg

Table 2. Characteristics of laboratory and industrial batches (Lots) of explosive FOP-5VA

                                 Lot 50 g   Lot 1 kg   Lot I 50 kg   Lot II 50 kg
  Content of Viton A (%)         5,41       5,16       5,12          5,04
  Bulk density (g/dm3)           822        738        647           686
  Sensitiveness to friction (N)  180        168        -             -
  Sensitiveness to impact (J)    -          -          -             -
  Microscopic analysis, Figure   11         12         13            14

The sensitiveness to friction and impact of industrially


obtained FOP-5VA granules were at a level of sensitivity of
laboratory samples FOP-5VA (Table 2).
The quality of granulated PBX samples (success of coating
process) is determined by microscopic analysis.
Microscopic photographs of PBX samples are presented on
Figures 11-14.

Figure 9. Sample FOP-5E, Lot II 50 kg


A comparative graphical representation of the particle size distribution (the proportion of the material that remains on the selected sieves) of one laboratory and two industrial FOP-5E batches is presented in Figure 10.

Figure 10. Particle size distribution of FOP-5E (% m/m retained vs. grain size, µm)

Figure 11. Sample FOP-5VA, Lot 50 g


The largest mass content in FOP-5E Lot 50 g and Lot II 50 kg was observed for the sieve fraction of 420 µm - 800 µm, while the granules of Lot I 50 kg are considerably smaller (the largest fraction 315 µm - 600 µm).
5.2. Characteristics of FOP-5VA
The values of bulk density, chemical composition and sensitiveness to friction and impact of explosive FOP-5VA are presented in Table 2.

Figure 12. Sample FOP-5VA, Lot 1 kg


6. CONCLUSIONS
The parameters of octogen phlegmatization with 5 % of polymer phlegmatizer (Estane and Viton A) by the water-slurry process are defined.
The industrial processing equipment was adapted and the PBX production technology (based on the water-slurry phlegmatization technique) was transferred to the industrial scale in the "Prva iskra - Namenska proizvodnja" Company.
The industrial PBX production process was defined for two explosive compositions based on octogen and polymer phlegmatizers: FOP-5E (HMX/Estane 95/5) and FOP-5VA (HMX/Viton A 95/5).

Figure 13. Sample FOP-5VA, Lot I 50 kg

Microscopic analysis showed that all industrially produced PBX batches were well phlegmatized. The increased mass of the PBX production batch had no adverse influence on the PBX quality. The results of the comparative characterization of laboratory and industrially produced PBX (bulk density, chemical composition and sensitiveness of the explosive compositions to friction and impact) showed that the quality of the industrially obtained PBX samples was similar to that of the laboratory PBX batches.
The used organic solvents (methylethyl ketone and ethyl acetate) are recycled during the industrial production of granulated PBX. This reduces the PBX production cost, while simultaneously providing conditions for safe production and environmental protection.

Figure 14. Sample FOP-5VA, Lot II 50 kg


The sample of FOP-5VA Lot I contains a number of large, rounded granules (Figure 13). The granules of Lot II are smaller and less spherical, especially when compared with the laboratory obtained batches of FOP-5VA (Figures 11 and 14). These granule characteristics did not adversely affect the sensitiveness to friction and impact of FOP-5VA Lot II. Namely, the sensitiveness to friction and impact of the industrially obtained FOP-5VA granules was similar to the sensitivity of the laboratory obtained samples (Table 2). A comparative graphical representation of the particle size distribution of one laboratory and two industrial FOP-5VA batches is presented in Figure 15.

Based on these results it can be concluded that the transfer


of granulated PBX production technology to the industrial
scale is successfully established.
References
[1] Terzi S. "Influence of coating techniques and organic
solvents on the quality of granulated PBX based on
HMX", Scientific Technical Review, 2007.,
Vol.LVII, No 1, pp. 34-38.
[2] Terzi S. "Granulisani PBX na bazi oktogena i
termoplastinog flegmatizatora", Elaborat, VTI-04-010525, Beograd, 2008.
[3] Terzi S. "Granulisani PBX na bazi oktogena i
Estana", Elaborat, VTI-04-01-0661, Beograd, 2011.
[4] Terzi S. "Definisanje tehnologije izrade granulisanih
PBX na industrijskom nivou", Tehniki izvetaj, VTI04-01-0733, Beograd, 2012.
[5] MIL-H-48358(AR) Military specification HMX/resin
explosive composition LX-14-0
[6] MIL-E-8111A Military specification explosive, PlasticBonded Molding Powder (PBXN-5)
[7] Savi S., Kraljevi S., Anti G. "Bojeva glava 9H110M
sa LKE (PBX)- Ivetaj o ispitivanju efikasnosti u
statikim uslovima Opit 3" VTI 02-01-0995, 2008.
[8] Savi S., Terzi S. "Izvetaj sa verifikacionih
ispitivanja presovanog oktogena flegmatizovanog
polimerom FOP-5E", VTI-02-01-0294, 2014.
[9] Terzi S. "Transfer tehnologije izrade granulisanog
PBX u industrijske uslove i verifikacija njegovog
kvaliteta kroz primenu u kumulativnoj municiji"
Elaborat, VTI-04-01-0793, Beograd, 2014.

Figure 15. Particle size distribution of FOP-5VA (% m/m retained vs. grain size, µm; Lot 50 g, Lot I 50 kg, Lot II 50 kg)


FOP-5VA Lot II (the largest part: 315 600 m) has a
similar grain size distribution as a laboratory sample made
(Lot 50g), while granules of Lot I are quite bigger (the
largest part: 600-1190 m). The formations of small
granules are caused by the higher stirring speed during the
octogen coating process.

[10] Teipel U. "Energetic Materials: Particle processing


and characterization", WILEY-VCH, Weinheim,
2005.
[11] SORS-8457 "Metode kontrolisanja brizantnih
eksploziva"
[12] EN 13631-4 "Explosives for civil uses - High explosives - Part 4: Determination of sensitiveness to impact of explosives."
[13] EN 13631-3 "Explosives for civil uses - High explosives - Part 3: Determination of sensitiveness to friction of explosives."


EXPLOSIVE REACTIVE ARMOR ACTION AGAINST SHAPED CHARGE JET
DEJAN MICKOVIĆ
University of Belgrade, Faculty of Mechanical Engineering, dmickovic@mas.bg.ac.rs
SLOBODAN JARAMAZ
University of Belgrade, Faculty of Mechanical Engineering, sjaramaz@mas.bg.ac.rs
PREDRAG ELEK
University of Belgrade, Faculty of Mechanical Engineering, pelek@mas.bg.ac.rs
NENAD MILORADOVIĆ
Ministry of Defence of the Republic of Serbia, Belgrade
DRAGANA JARAMAZ
University UNION-Nikola Tesla, Faculty of Civil Management, Belgrade, jaramaz.s@sbb.rs
DUŠAN MICKOVIĆ
EDePro, Belgrade, dusan.mickovic@gmail.com

Abstract: Study of interaction of explosive reactive armor (ERA) with shaped charge jet is the basis for evaluation of the
effectiveness of ERA. The physically based theoretical model of this interaction is given. It is incorporated in the NERA
computer code. The influences of backward moving plate and forward moving plate thickness, explosive layer thickness, jet
attack angle, and distance between ERA and main armor are investigated. Computational results of NERA code are
compared with experimental data. The computational and experimental results of penetration in the steel armor target are
in good agreement. The results of NERA code calculations reveal the possibilities for an improvement of ERA efficiency.
Keywords: Explosive reactive armor (ERA), Shaped charges, Penetration, Armor efficiency.

1. INTRODUCTION

2. THEORETICAL MODEL

Shaped charge jets are one of the most lethal threats on main
battle tanks. In the field of armor protection the explosive
reactive armor (ERA) undoubtedly has one of the most
successful defeat mechanisms for reducing the lethality of
these jets. The ERA is a type of add-on armor that consists
of cassettes made of two metal plates with an explosive
layer in between. The ERA is placed at a certain distance
from the main armor to enhance its performance. When a
shaped charge jet hits the cassette, the explosive is
detonated and the plates are pushed to the side. The
movement of the plates causes the impact point of the jet to
constantly shift to new untouched regions, increasing the
dynamic effective thickness of the plates. Study of
interaction of ERA with shape charge jet is the basis for
evaluation of ERA effectiveness. In this paper the new
physically based theoretical model for ERA action against
shaped charge jet is proposed. The computational results are compared with experimental investigations of influence factors on ERA effectiveness [1].

The theoretical model starts from the following


configuration. An ERA sandwich hBMP/hex/hFMP (hBMP - backward moving plate thickness, hex - explosive layer thickness, hFMP - forward moving plate thickness) of length L and width W is placed at distance Z0 from the main armor (figure 1).

Figure 1. Sketch of the ERA cassette attacked by the


shaped charge jet


The ERA is struck in the centre of the backward moving plate (BMP) by a shaped charge jet at an angle of attack θ. The jet length (lj) and the jet tip velocity (Vj) are defined from the warhead nominal penetration in a steel target Pnom at the built-in standoff distance (S0), supposing a linear velocity profile along the jet and a jet tail velocity (VjT) equal to the cutoff velocity. If detailed design characteristics of the warhead are known, the jet parameters can also be obtained by the CUMUL computer code [2].

There are three main phases of interaction between the ERA cassette and the jet: perforation of the ERA cassette by the jet tip, jet precursor formation, and jet interaction with the moving plates.

2.1 Perforation of ERA cassette by the jet tip

The first phase of ERA/jet interaction is a head-on collision between the jet and the cassette. The cassette is penetrated, a hole is created by the jet tip in all layers, and the jet tip is eroded.

During the oblique perforation (θ ≠ 0) an ellipse-shaped hole is created in the metal plates, whose largest diameter coincides with the jet tip path projection on the plate. The largest hole diameter in the BMP created by the jet tip can be estimated by the Naz formula:

dhtip,BMP = …    (1)

where dcf is the crater diameter in a semi-infinite target, dtip is the jet tip diameter, YBMP and ρBMP are the BMP yield strength and density, and te is an interaction time proportional to the craterisation time. More details concerning these parameters are given in [3].
If the ERA plates have the same properties, the hole diameter created in the forward moving plate (FMP) by the jet tip (dhtip,FMP) is also given by equation (1).
The initiation time of a covered high explosive charge attacked by a shaped charge jet can be estimated from the empirical equation [4]:

ti = 122 000 · Vj^(-6)  [µs]    (2)

where, particularly in this equation, the jet tip velocity Vj is given in mm/µs.
When a high explosive is detonated, the metal plates are accelerated to high velocities. The BMP velocity (UBMP) and the FMP velocity (UFMP) are given by the expressions for asymmetric sandwiches with Chanteret correction factors for the plate length (L) and width (W) [5]:

UBMP = sqrt(2E) · [ (MBMP/C)·(MBMP + MFMP)/MFMP + 1/3 + (4/3)·(MBMP/C)·(hex/L + hex/W) ]^(-1/2)    (3)

UFMP = sqrt(2E) · [ (MFMP/C)·(MBMP + MFMP)/MBMP + 1/3 + (4/3)·(MFMP/C)·(hex/L + hex/W) ]^(-1/2)    (4)

In equations (3) and (4) sqrt(2E) is the Gurney constant, MBMP = hBMP·ρBMP, MFMP = hFMP·ρFMP and C = hex·ρex, where ρFMP and ρex are the FMP density and the explosive density.

From momentum considerations, at the moment of detonation the BMP acceleration time to the velocity UBMP can be defined by

τBMP = UBMP·ρBMP·hBMP / pCJ    (5)

Here pCJ is the Chapman-Jouguet pressure. The FMP acceleration time (τFMP) is obtained from equation (5) with the FMP parameters.
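A small numerical sketch of equations (3)-(5) follows, written in Python as a reading aid. The plate, explosive and Gurney-constant values are illustrative assumptions (an ERA 8/8/1-like steel sandwich with a generic explosive), not data taken from the experiments in [1].

```python
import math

def plate_velocities(h_bmp, h_fmp, h_ex, rho_plate, rho_ex, L, W, gurney_2E):
    """Asymmetric-sandwich plate velocities with the finite-size correction of Eqs. (3)-(4)."""
    M_bmp, M_fmp, C = h_bmp * rho_plate, h_fmp * rho_plate, h_ex * rho_ex  # areal masses
    size_term = h_ex / L + h_ex / W
    u_bmp = gurney_2E * ((M_bmp / C) * (M_bmp + M_fmp) / M_fmp
                         + 1.0 / 3.0 + (4.0 / 3.0) * (M_bmp / C) * size_term) ** -0.5
    u_fmp = gurney_2E * ((M_fmp / C) * (M_bmp + M_fmp) / M_bmp
                         + 1.0 / 3.0 + (4.0 / 3.0) * (M_fmp / C) * size_term) ** -0.5
    return u_bmp, u_fmp

def acceleration_time(u_plate, rho_plate, h_plate, p_cj):
    """Plate acceleration time from the momentum balance of Eq. (5)."""
    return u_plate * rho_plate * h_plate / p_cj

# Illustrative ERA 8/8/1 cassette: thicknesses in m, densities in kg/m^3, lengths in m.
h_bmp, h_ex, h_fmp = 8e-3, 8e-3, 1e-3
rho_steel, rho_ex = 7850.0, 1600.0
L, W = 0.25, 0.15            # plate 250 x 150 mm, as in the experiments
gurney_2E = 2400.0           # sqrt(2E) in m/s (assumed value)
p_cj = 20e9                  # Chapman-Jouguet pressure in Pa (assumed value)

u_bmp, u_fmp = plate_velocities(h_bmp, h_fmp, h_ex, rho_steel, rho_ex, L, W, gurney_2E)
print(f"U_BMP = {u_bmp:.0f} m/s, tau_BMP = {1e6*acceleration_time(u_bmp, rho_steel, h_bmp, p_cj):.1f} us")
print(f"U_FMP = {u_fmp:.0f} m/s, tau_FMP = {1e6*acceleration_time(u_fmp, rho_steel, h_fmp, p_cj):.1f} us")
```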

2.2 Jet precursor formation

During the second phase of the ERA-jet interaction the front portion of the jet interacts with the detonation products. The time interval from the end of the BMP penetration by the jet to the moment of the complete acceleration of the plates can be estimated as

tact = ti + max(τBMP, τFMP)    (6)

During this time the front part of the jet moves forward through the cassette holes without interaction with the plates, and penetrates the target. This effect was verified experimentally by Held [6], and was further analyzed by Mayseless [7], where the name precursor for the leading edge of the jet was first presented. The sketch of the precursor formation is presented in figure 2.

Figure 2. Sketch of the precursor formation

The jet is elongated during the time interval tact, and the total length of the jet at the moment tact is:

l*j = lj + (Vj - VjT)·tact    (7)

and the length of the precursor is lprec = Vj·tact.


The precursor tail velocity (VprecT) is equal to the remaining jet front velocity (V*j), and is defined as:

VprecT = V*j = Vj - (Vj - VjT)·Vj·tact / l*j    (8)

According to the expression for the penetration depth of a continuous jet with a non-uniform velocity distribution, the target penetration by the precursor is [8]:

Pprec = [S0 + (hBMP + hex + hFMP + Z0)/cos θ] · [(Vj/V*j)^(1/γT) - 1]    (9)

where γT = sqrt(ρj/ρT) and ρT is the target density.
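To make the precursor phase concrete, the following Python fragment chains equations (2) and (6)-(9). The jet and cassette numbers are rough assumptions for a copper jet against a steel ERA cassette and an RHA target, not values reported in the paper.

```python
import math

def precursor_penetration(V_j, V_jT, l_j, S0, h_bmp, h_ex, h_fmp, Z0, theta_deg,
                          tau_bmp, tau_fmp, rho_j, rho_T):
    """Precursor formation (Eqs. 2, 6-8) and its target penetration (Eq. 9)."""
    theta = math.radians(theta_deg)
    t_i = 122_000.0 * V_j ** -6                      # Eq. (2), V_j in mm/us, t_i in us
    t_act = t_i + max(tau_bmp, tau_fmp)              # Eq. (6)
    l_star = l_j + (V_j - V_jT) * t_act              # Eq. (7), lengths in mm
    l_prec = V_j * t_act                             # precursor length
    V_j_star = V_j - (V_j - V_jT) * V_j * t_act / l_star   # Eq. (8)
    gamma_T = math.sqrt(rho_j / rho_T)
    standoff = S0 + (h_bmp + h_ex + h_fmp + Z0) / math.cos(theta)
    P_prec = standoff * ((V_j / V_j_star) ** (1.0 / gamma_T) - 1.0)  # Eq. (9)
    return t_act, l_prec, V_j_star, P_prec

# Assumed inputs: a 120 mm-class copper jet (mm, mm/us, us) against a steel 8/8/1 cassette.
t_act, l_prec, V_j_star, P_prec = precursor_penetration(
    V_j=8.0, V_jT=2.0, l_j=400.0, S0=360.0,
    h_bmp=8.0, h_ex=8.0, h_fmp=1.0, Z0=20.0, theta_deg=65.0,
    tau_bmp=1.1, tau_fmp=0.9, rho_j=8900.0, rho_T=7850.0)
print(f"t_act = {t_act:.2f} us, precursor length = {l_prec:.1f} mm")
print(f"remaining jet front velocity = {V_j_star:.2f} mm/us, P_prec = {P_prec:.1f} mm")
```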

2.3 Jet interaction with moving plates

The third phase is the jet interaction with the moving plates. During the interaction with the backward moving plate the jet is deflected. The attack angle after the deflection of the jet is θ' = θ + δ. The expression for the jet deflection angle δ is given in [3].

The surface cut made by the jet in the backward moving plate consists of the jet tip hole and the slit (figure 3). The presence of such a final hole shape, like a key hole, has been reported in [7, 9].

Figure 3. Sketch of the surface cut in the backward moving plate produced at oblique impact of the jet

The slit length produced by the jet in the backward moving plate is [10]:

Lslit = (l*j - lprec)·UBMP·sin θ / (V*j·cos θ + UBMP)    (10)

The complete surface cut in the BMP is LBMP = dhtip,BMP + Lslit. The hole diameter created by the jet in the slit, dhBMP, can be determined from equation (1) with the jet diameter dj and the jet velocity V*j. For the impact in the centre of the BMP the slit length in equation (11) is Lslit ≤ (L - dhtip,BMP)/2.

The penetration of the backward moving plate by the jet is estimated as:

PBMP = (hBMP/cos θ)·(Lslit/dhBMP)    (11)

It is supposed that the rear part of the jet interacts with the BMP. Therefore, the jet tail velocity after the interaction with the BMP (V*jT) is defined from the penetration equation of a continuous jet:

V*jT = VjT·(PBMP/S0 + 1)^(1 - γBMP)    (12)

where γBMP = sqrt(ρj/ρBMP).

The forward moving plate is attacked by the jet with the front and tail velocities V*j and V*jT. The Yadav and Kamat [11] approach for the forward moving plate interaction with a continuous jet is accepted. The impact velocity of the jet is:

VjH(1) = V*j - UFMP·cos θ'    (13)

The front portion of the jet creates an initial hole in the forward moving plate. The diameter of this hole, dhFMP(1), can be determined from equation (1) with the FMP parameters ρFMP, hFMP, YFMP, the jet diameter dj, the jet velocity VjH(1) and the impact angle θ'.

The following jet passes through the hole dhFMP(1) until it shifts transversely by a distance

ΔX(1) = (dhFMP(1) - dj)/2    (14)

The velocity of the last element of the jet that enters the initial hole is:

VjC(1) = VjH(1) · [1 + (dhFMP(1) - dj)·VjH(1) / (2·S01·UFMP·sin θ')]^(-1)    (15)

The velocity of the front of the outgoing jet is:

Vj(1) = VjH(1) · (1 + hFMP/(S01·cos θ'))^(1 - γFMP)    (16)

where γFMP = sqrt(ρj/ρFMP) and S01 = S0 + (hBMP + hex)/cos θ'.

The penetration in the target (main armor) by the jet going out of the first hole is given by:

P1 = [S01 + (hFMP + Z0)/cos θ'] · [ ((Vj(1) + UFMP·cos θ') / (VjC(1) + UFMP·cos θ'))^(1/γT) - 1 ]    (17)

Due to the FMP motion the jet becomes completely obstructed and the second hole, of diameter dhFMP(2), is produced. This process of the plate interaction with the jet continues until either the jet attains its tail velocity V*jT during the penetration of the plate or the plate reaches the target and stops.
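The stepping through successive holes described by equations (13)-(17) can be sketched as a simple loop. The Python fragment below is only a schematic of that iteration under simplifying assumptions: a fixed hole diameter per step stands in for the result of equation (1), and the jet-element bookkeeping is deliberately crude; all numerical inputs are invented for illustration.

```python
import math

def hole_sequence(V_front, V_tail, U_fmp, theta_p_deg, h_fmp, S01, Z0, d_j,
                  gamma_fmp, gamma_T, d_hole=12.0, max_holes=20):
    """Schematic iteration over successive FMP holes, in the spirit of Eqs. (13)-(17);
    d_hole is a placeholder for the hole diameter that Eq. (1) would provide."""
    theta_p = math.radians(theta_p_deg)
    penetrations = []
    V_work = V_front                                   # front velocity attacking the plate
    for _ in range(max_holes):
        if V_work <= V_tail:                           # jet consumed down to its tail velocity
            break
        V_jH = V_work - U_fmp * math.cos(theta_p)      # Eq. (13): impact velocity
        dX = (d_hole - d_j) / 2.0                      # Eq. (14): transverse shift closing the hole
        V_jC = V_jH / (1.0 + dX * V_jH / (2.0 * S01 * U_fmp * math.sin(theta_p)))       # Eq. (15)
        V_out = V_jH * (1.0 + h_fmp / (S01 * math.cos(theta_p))) ** (1.0 - gamma_fmp)   # Eq. (16)
        standoff = S01 + (h_fmp + Z0) / math.cos(theta_p)
        ratio = (V_out + U_fmp * math.cos(theta_p)) / (V_jC + U_fmp * math.cos(theta_p))
        penetrations.append(standoff * (ratio ** (1.0 / gamma_T) - 1.0))                # Eq. (17)
        V_work = V_jC                                  # next cycle starts from the obstructed element
    return penetrations, sum(penetrations)             # the sum of per-hole penetrations

# Invented inputs (mm, mm/us): a fast jet against a thin moving plate.
p_i, p_total = hole_sequence(V_front=7.8, V_tail=2.5, U_fmp=2.3, theta_p_deg=65.0,
                             h_fmp=1.0, S01=400.0, Z0=20.0, d_j=3.0,
                             gamma_fmp=1.06, gamma_T=1.06)
print(f"{len(p_i)} holes, summed jet/FMP penetration = {p_total:.0f} mm")
```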



If the distance Z0 is sufficiently long, then the target penetration due to the jet/FMP interaction is:

PFMP,int = Σ(i=1..N) Pi    (18)

where N is the total number of holes made by the complete length of the jet passing through the moving plate and Pi is the penetration of the jet part going out of the i-th hole.

The total penetration in the target is:

Ptot = Pprec + β·PFMP,int    (19)

where the coefficient β (< 1) takes into account the reduction of the target penetration caused by the jet interaction with the high explosive detonation products.

If the distance Z0 is small, the FMP impacts the target before the whole length of the jet passes through it. The velocity of the jet element that strikes the plate at the moment when it reaches the target is:

VjL = UFMP·V*j·(S01 + Z0/cos θ') / (UFMP·S01 + Z0·V*j)    (20)

The velocity of the jet front after the penetration of the FMP at rest is:

VjR = VjL·[1 + hFMP/(cos θ'·(S01 + hFMP + Z0))]^(1 - γT)    (21)

The residual penetration in the target is obtained as

PR = [S01 + (hFMP + Z0)/cos θ']·[(VjR/V*jT)^(1/γT) - 1]    (22)

In this case the total penetration in the target is:

Ptot = Pprec + β·(PFMP,int + PR)    (23)

where PFMP,int is obtained from equation (18) with the number of holes, less than N, created in the FMP till the end of the plate motion.

The computer code NERA was created for the calculation of the ERA interaction with a shaped charge jet. The NERA code represents an improvement of the ERASC code reported in [12]. The code enables a detailed analysis of the parameters influencing ERA efficiency.

3. EXPERIMENTAL RESULTS AND MODEL VERIFICATION

Experimental results of Ugrčić [1] served as a basis for the NERA code verification. The ERA sandwiches consisted of steel plates 250 x 150 mm with a PETN explosive layer. The sandwiches were placed parallel to the target at the distance Z0 = 20 mm and were attacked at angles of 0°, 45° and 65° by 64 mm and 120 mm warhead shaped charge jets with a standoff distance of three calibres (S0 = 3d).

Computational and experimental results for the total cut in the backward moving plate (LBMP) of the ERA sandwich 8/8/1 produced by the jet at different angles of attack are presented in figure 4. A very good agreement of the calculated and measured values of the total cut in the case of the 120 mm warhead can be seen from figure 4. Thus, the approach used in the NERA code for the jet/BMP interaction seems to be verified.

Figure 4. Total cut in BMP of ERA 8/8/1 made by 64 mm and 120 mm jets at different angles

In the case of attack angle θ = 0° only static penetration of the ERA plates takes place. The penetration in the target is mainly reduced by the interaction of the jet with the explosive detonation products. The experimental results of target penetration at θ = 0°, given in [1], are used to obtain an estimation of the coefficient β as:

β = Pexp(θ=0°) / Pnom    (24)

Penetrations of the 120 mm and 64 mm shaped charge jets in an RHA target protected by ERA 8/8/1 and 3/15/1 at different attack angles are given in figures 5 and 6.

Figure 5. Penetration of 120 mm jet in RHA target protected by ERA 8/8/1 and 3/15/1 at different angles

A very good agreement of the computational results with the experimental data can be seen in figure 5 for the penetration of the RHA target by the 120 mm shaped charge jet as a function of the attack angle. Calculations show that the ERA 3/15/1 efficiency is reduced at attack angles 55° and 65°. This is caused by an incomplete jet/BMP interaction (Lslit > (L - dhtip,BMP)/2 in equation (13)).

3.2 Explosive layer thickness effect


The influence of explosive layer thickness on 8/hex/1 ERA
effectiveness against 120 mm warhead was analyzed by
NERA code.
Estimated values of β are obtained from the established exponential correlation between the coefficient β and the explosive layer thickness hex [3].
The results for the case of ERA sandwiches 8/hex/1 are
presented in figure 8.

Figure 6. Penetration of 64 mm jet in RHA target


protected by ERA 8/8/1 and 3/15/1 at different angles

The agreement of results for 64 mm shaped charge jet,


presented in figure 6, is quite good. The calculated results
suggest the existence of an abrupt reduction of target
penetration to the precursor penetration at attack angles
exceeding the limit that depends on the ERA cassette
configuration. Some more experiments are needed,
particularly in the vicinity of the limit attack angle, to
confirm such specific shape of curves.
It is assumed in the calculations that the β coefficients, determined by equation (25), remain constant for oblique
impacts of the jet. Although the effect of jet disturbance by
the explosive detonation products for oblique impacts is
somewhat underestimated in the calculations, the good
correlation with experimental data, especially for 120 mm
jet, justifies this assumption.
In that way the developed theoretical model and NERA
computer code are verified.

Figure 8. Penetration of 120 mm jet in RHA target as a


function of explosive layer thickness of ERA 8/hex/1

The ERA efficiency increases with increase of explosive


layer thickness, as expected. The total penetration is reduced
to the precursor penetration for explosive layers greater than
10 mm at attack angle 65.

3.3 Forward moving plate thickness effect

3.1 Backward moving plate thickness effect

The influence of the forward moving plate thickness on ERA effectiveness against the 120 mm warhead for the case of ERA sandwiches 8/8/hFMP is presented in figure 9.

The influence of backward moving plate thickness on ERA


effectiveness against 120 mm warhead was analyzed by
NERA code calculations. Computational results for hBMP/8/1
ERA sandwiches are presented in figure 7.

Figure 9. Penetration of 120 mm jet in RHA target as a


function of FMP thickness of ERA 8/8/hFMP

Figure 7. Penetration of 120 mm jet in RHA target as a


function of BMP thickness of ERA hBMP/8/1

The strong influence of FMP thickness on ERA


effectiveness can be seen from figure 9, suggesting that the
plate thickness of 1 mm may be too small.

Incomplete interaction with the jet is calculated for ERA


sandwiches with a thin BMP at high attack angles (a 1 mm plate at angles 55° and 65°, and a 3 mm plate at angle 65°).

3.4 Distance from the target effect

4. CONCLUSION

The influence of ERA distance from the target on its


effectiveness against 120 mm warhead for the case of ERA
sandwich 8/8/1 is presented in figure 10.

Influence factors on ERA effectiveness are of constant


interest for ERA and shaped charge warhead designers. In
this paper the theoretical model incorporated in NERA code
is developed. The model comprises the static and dynamic
actions of metal plates onset. The reduction of target
penetration caused by the jet interaction with high explosive
detonation products is assessed from experimental results
for the case of zero attack angle of the jet.

A slight increase of the ERA 8/8/1 efficiency with increasing its distance from the target is observed at attack angles 20° and 45°. However, the total penetration is reduced to the precursor penetration by an increase of the ERA distance from the target (Z0) at high attack angles (Z0 > 70 mm at angle 55° and Z0 > 30 mm at angle 65°).

The influence of upper and down plate thickness, jet attack


angle and distance between ERA and main armor are
determined by NERA code and compared with experimental
investigations. Very good agreement between computational
and experimental results is observed. The developed code
enables optimisation of explosive reactive armor.

References
[1] Ugrčić,M.: Contribution to Theory of Interaction Process of Explosive Reactive Armor and Shaped Charge Projectile, (in Serbian), PhD Thesis, Faculty of Mechanical Engineering, Belgrade, 1995.
[2] Jaramaz,S., Mickovi,D., Elek,P., Jaramaz,D.,
Mickovi,D.D.: A Model for Shaped Charge Warhead
Design, Strojniki vestnik J. Mech. Eng., 58(6)
(2012) 403-410.
[3] Mickovi,D., Jaramaz,S., Elek,P., Miloradovi,N.,
JaramazD.: A Model for Explosive Reactive Armor
Interaction with Shaped Charge jet, Propellants
Explos. Pyrotech., (41) (2016) 53-61.
[4] Held,M.: Discussion of the Experimental Findings from
the Initiation of Covered but Unconfined High
Explosive Charges with Shaped Charge Jets,
Propellants Explos. Pyrotech., (12) (1987) 167-174.
[5] Chanteret,P.Y.: Velocity of HE Driven Metal Plates
with Finite Lateral Dimensions, 12th Int. Symp. on
Ballistics, San Antonio, 1990, 369-378.
[6] Held,M., Schwartz,W.: The Importance of Jet Tip
Velocity for the Performance of Shaped Charges
against ERA, Propellants Explos. Pyrotech., (19)
(1994) 15-18.
[7] Mayseless,M.: Jet plate Interaction: the Precursor, 18th
Int. Symp. on Ballistics, San Antonio, 1999. 10191026.
[8] Walters,W., Flis,W., Chou,P.: A Survey of ShapedCharge Jet Penetration Models, Int. J. Imp. Eng., (7)
(1988) 307-325.
[9] Mayseless,M.: Reactive Armor-Simple Modeling, 25th
Int. Symp. on Ballistics, Beijing, 2010, 1554-1563.
[10] Yadav,H.S.: Interaction of a metallic jet with a Moving
Target, Propellants, Explos., Pyrotech., (13) (1988) 7479.
[11] Yadav,H.S., Kamat,P.V.: Effect of Moving Plate on JetPenetration, Propellants Explos. Pyrotech., (14) (1989)
12-18.
[12] Jaramaz,S., Mickovi,D., Elek,P.: Explosive Reactive
Armor: Theoretical and Experimental Studies, 27th Int.
Symp. on Ballistics, Freiburg, 2013, 1495-1505.

Figure 10. Penetration of 120 mm jet in RHA target as a


function of distance from the target of ERA 8/8/1

Experimental investigations given in [1] were focused on


ERA sandwich 8/8/1. However, the results of NERA code
calculations reveal the possibilities for an improvement of
ERA efficiency. The penetration of RHA target protected by
different ERA sandwiches as a function of the jet attack
angle is presented in figure 11.

Figure 11. Penetration of 120 mm jet in RHA target protected by different ERA sandwiches as a function of the jet attack angle
The best protective capabilities are obtained with ERA


8/8/5. However, an increase of sandwich mass of about 38
% is involved compared to the sandwich 8/8/1. The ERA
4/8/5 is believed to be the best solution. This configuration
enables considerable decrease of target penetration
compared to the ERA 8/8/1 with the same sandwich mass.
Further improvement could be achieved by an increase of
ERA distance from the target (Z0).


AMMUNITION SURPLUS - THREAT TO POSSESSORS; DISPOSAL METHODS: REVIEW OF DEMILITARIZATION TECHNOLOGIES
BLA MIHELI
Slovenian MOD/SAF, Ljubljana, blaz.mihelic@mors.si; blazmihelic@yahoo.com

Abstract: A surplus of conventional ammunition which is old, unserviceable, unstable or hazardous carries a great threat
for possessors. Numerous Unplanned Explosions at Munitions Sites (UEMS) are clear evidence of the global problem
related to Physical Security and Stockpile Management (PSSM). Statistical data of UEMS and their causes and
consequences are presented.
Different disposal methods are discussed; demilitarization technologies are reviewed with an emphasis on a safe, cost-effective and environmentally-friendly approach. Examples of different types of ammunition, various types of energetic
materials, and reclamation technologies are shown, and practical experience in Resources, Recovery, and Recycling, the R3
approach of conventional ammunitions demilitarization, is shared.
Keywords: ammunition, demilitarization, disposal, explosion, R3.

1. INTRODUCTION
"A soldier can survive on the battlefield for months
without mail, weeks without food, days without water,
and minutes without air, but not one second without
ammo!"*
*(Author Unknown)
Ammunition is still the basic element which provides military
power to the armed forces. Ammunition and ammunition
elements are subject to deterioration, and ammunition gradually
becomes less reliable, less accurate, and less effective. It can
become a hazard for users and, in specific situations, even
hazardous for storage and handling [1].
Manufacturers usually guarantee a period of one, two or
three years for ammunition, depending on the contract. The
guaranteed shelf life is usually ten years if ammunition is
stored and handled properly. When the warranty period
expires, the suppliers are no longer responsible for the
ammunition. The shelf life can be extended based on an
extensive ammunition surveillance program. The
Ammunition Condition Code (ACC) defines the condition
of the ammunition. When ammunition becomes unstable
(contains unstable propellants) and becomes a hazard for the
user, or for any other reason, it needs to be withdrawn from
service and disposed of. There are many options for
disposal; the final decision depends on different technical,
economic and political factors [2].

A huge and unjustified number of Unplanned Explosions at


Munitions Sites (UEMS) is a clear indication of a massive,
global problem in the field of ammunition management.
Thanks to the Small Arms Survey (SAS), which
systematically monitors UEMS events, ammunition experts
have solid records about this alarming global problem. On
average, during the period from 1979 to 2015, around 20
UEMS occurred annually [9, 10, 11, 12,].

Picture 1: Number of UEMS around the globe (by SAS)


We do not know the actual causes of most UEMS. The
number of UEMS itself demonstrates the global problem
while the number of dead and injured and the material
damage to infrastructure is clear indication of the true depth
of the problem. A sympathetic detonation (SD, or SYDET),
also called a flash over, is a detonation, usually unintended,
of an explosive charge by a nearby explosion. Sympathetic
detonation is caused by a shock wave, or the impact of
primary or secondary blast fragments, which means an


Table 1: Data on the most deadly UEMS (source SAS).

almost instant transmission of the explosion from one stock


(bunker) to another. The initiating explosive is called the
donor charge, and the initiated one is known as the receptor
charge. In the case of a chain detonation, a receptor
explosive can become a donor one. On the other hand, a
consequential explosion occurs when a cook off
mechanism causes propagation; this process can last hours,
days or even weeks. Cooking off means setting off an
explosive by subjecting it to sustained heat caused by e.g.
flame spread, heat radiation, or impact of fragments.
Consequential explosions are characteristic of open stocks,
and, where stocks are stored inside a light-weight building,
they occur where there is no infrastructure, or where the
existing infrastructure cannot prevent fire propagation.
Many models are available to calculate the hazard radius to
prevent sympathetic detonation, but there is no model to
predict the hazard radius of consequential explosions. In the
case of a consequential explosion the hazard radius can be
equal to the maximum distance that the fragment or
projectile is ejected. Based on practical experience, some
rocket ammunition can be ejected from the crater to a
distance equal to the maximum range, if rockets are
regularly fired from the launcher.

Sympathetic detonation and consequential explosions are the main reason for the high rate of mortality/injuries and material damage to military or civilian infrastructure.

Poor ammunition stockpile management, live unserviceable ammunition in service, and hazardous ammunition create an unnecessary danger to military and even civilian infrastructure located in the surroundings of military storage and maintenance facilities.

Some examples of typical UEMS are shown in the pictures below:

Picture 4: Example of UEMS in an open storage


magazine in Abadan, Turkmenistan; on the left is the
storage facility before the explosion, and on the right after
the explosion. (by Google Earth)
The explosion happened in 2011 and left 100 dead and 1320 injured. The reason for the serious consequences was the lack of appropriate infrastructure.

Picture 5: Example of UEMS which occurred on 2 June


2011 near the city of Izhevsk. (by Google Earth)

Picture 2: Causes of UEMS as reported by SAS

The magazines were in a light-weight structure; on the left


is the storage facility before the event and on the right,
afterward. The explosion wiped out the entire ammunition
storage facility. Almost 20,000 inhabitants of the
surrounding area had to be evacuated, and 95 people were
injured in the blast. Windows were shattered up to 10km
away. The reason for the consequences lack of appropriate
infrastructure

There are many examples of poor safety management. In


one case, a local prison has been built in close proximity to
an ammunition storage facility. If there is a UEMS, a high
mortality/injuries rate can be expected in the prison, caused
by an inappropriate planning and licensing procedure.

Picture 6: Consequences of UEMS in earth-covered


magazine in Afyonkarahisar, western Turkey.
(by Google Earth)

Picture 3: A civilian prison is located in close proximity


to a military ammunition storage facility. Location
Kozlovac near Tuzla - BiH. (by Google Earth)

An explosion of a hand grenade left 25 soldiers dead in 2012. Modern infrastructure prevented sympathetic and consequential explosions from spreading from bunker to bunker. A poor ammunition management system was observed at the facility.


Although the life of an item of ammunition is often


determined by safety considerations related to energetic
materials, this may not always be the case. The
deterioration, due to ageing, of non-energetic components
such as rubber seals, electronic components and structural
materials can also limit the safe life of the ammunition by
affecting safety or performance parameters. It is important
that the whole system is considered when assessing life-limiting factors for ammunition, not just the propellant or
other energetics.

Picture 7: Example of catastrophic consequences of


UEMS to civilian population of Brazzaville, Congo.
(by Google Earth)
The explosion happened in 2012 and left 500 dead and 3000
injured. It is clear evidence of neglect of basic safety
principles.

Picture 8: Consequences of UEMS on civilian


infrastructure (Cyprus) (picture - Google Earth)
The UEMS happened in the military base Evangelos
Florakis located near a power and desalination plant in
Cyprus. The incident occurred on 11 July 2011, after 98
containers of explosives had been stored for 2 years in the
sun. The resulting explosion killed 13 people, 12 of them
instantly, including the Commander of the Navy (Cyprus's
most senior naval officer), and the base commander; 62
people were injured. The explosion severely damaged
hundreds of nearby buildings, including all the buildings in
Zygi and the island's largest power station, responsible for
supplying over half of Cyprus' electricity. The Cypriot
Defense Minister and the Commander-in-Chief of the
Cypriot National Guard both resigned, angered by the
government's failure to dispose of the munitions, which had
been seized in 2009.

Picture 9: Process of growing crystals on the surface of


anti-tank mine TMA-3. (Photo taken by author)
According to the author's knowledge, this process has not been published in the open literature. The same process has been noticed in Soviet mines. Analysis confirmed the content of the crystals: pure TNT.

The correct way to avoid or to reduce the number of UEMS and their consequences is based on the well-known and globally accepted principle As Low As Reasonably Achievable (ALARA), which means that the potential hazard is eliminated or reduced to the minimum acceptable level. In practice, unserviceable ammunition, or ammunition that is surplus for any other reason, needs to be withdrawn from stock and disposed of. The users (the MoD/AF) should only keep serviceable and well-maintained ammunition for training and for national defense purposes.

Picture 10: Main elements of ammunition life cycle

Surveillance and proof are essential for ensuring the safety,


reliability, and operational effectiveness of conventional
ammunition [1].

Ammunition shelf life is the time from manufacture to


disposal during which a product can be used. Shelf life
technique comprises test methods and knowledge of the
ageing of materials and products in various environments
and stresses.

In-service proof and the surveillance of ammunition are


undertaken to ensure that the ammunition meets the required
quality throughout its entire life cycle. Quality, from this
perspective, includes both the performance of the
ammunition during use and its safety and stability during
storage. The chemical, electrical, and mechanical properties
of ammunition change and degrade with time, leading to a
finite serviceable life for each munition. The accurate
assessment of munition life is of paramount importance in
terms of safety and cost [16].

If the ammunition's functional reliability, performance and safety are compromised, such ammunition needs to be withdrawn from service and disposed of.

Ammunition disposal methods

Designating ammunition for disposal does not necessarily mean that it needs to be physically destroyed, but it does mean that it must be withdrawn from service using one or another method.

and the munitions typically placed for disposal may be


difficult to sell if obsolete.

The optimal decision on the appropriate method for disposal


and demilitarization depends on many elements, such as:

safety of ammunition for transport and storage;

Demilitarization costs or even profit depend on many


factors. The most important of these are: type and quantity
of ammunition, available infrastructure and technology,
time available for disposal and degree of R3, and the local
market for scrap metal and recovered energetic materials.
The general observation is that gifting is cheaper than
demilitarization.

is the ammunition equipped with or without fuzes?

To increase use

is the ammunition coming from a stockpile or from the


range?

has it been recovered as a result of battle area clearance


(BAC) activities?

A logical solution to the disposal of ammunition is to


increase its use; however, this method does not find
application in practice due to the inflexibility of the military
system and the necessary coordination between different
departments. Safety concerns are also important if
ammunition is used for training purposes at the end of its
shelf life.

priority for disposal;

To change the purpose of the ammunition

type of ammunition;
condition and origin of ammunition;
quantity of ammunition;

is the ammunition military loot?


has the ammunition been affected by UEMS?

which methods of disposal are available;

Ammunition intended to be used for one design purpose can


be used for another purpose with or without modification. A
typical example is artillery ammunition, originally intended
to destroy enemy forces and infrastructure, which has
proven effective in triggering snow avalanches.

documentation and records;


is the ammunition subject to conventions such as the AntiPersonnel Landmine Ban Convention, the Convention on
Cluster Munitions, or the UN Firearms Protocol?
the local market for scrap metal and energetic materials
etc.

The warhead of the high explosive anti-tank HEAT hand


grenade, Bomba Runa Kumulativna (BRK) -79, has
proved to be a useful tool for the destruction of UXO,
especially UXOs from WWI (which have a very thick
shell), and armor piercing (AP) projectiles which contain a
less sensitive explosive charge known as Explosive D,
which requires a big donor charge for disposal by open
detonation.

Disposal methods are [1, 2, 15]:


to sell
to donate
to increase use
to change the purpose of the ammunition
to dump it into water
to be buried in landfill; this method is now forbidden but
many UXO from WWI and WWII are found on an almost
daily basis
To be destroyed/demilitarized by:
Open Burning/ Open Detonation - OB/OD
Industrial demilitarization.
To sell or donate

Picture 11: Top, warhead of HEAT hand grenade


Bomba Runa Kumulativna (BRK) -79, displayed as a
useful tool for the destruction of old ammunition, and
below, AP projectiles known to be very difficult to blow
up by ordinary cladding explosive charges.

It is common practice to put ammunition surplus on sale


before considering any other method of disposal. There are
some limitations: ammunition forbidden by various
conventions (anti-personnel mines, cluster ammunition etc.)
should not be put on the market at all, and the sale of
hazardous ammunition is also not allowed. Ammunition is
usually kept on sale for a period of one to three years. If the
auction fails, other disposal methods become justified.

Live ammunition can also be disassembled to make training


ammunition for display and training purposes.
Dump ammunition into water or bury in landfill
Methods of disposing of ammunition by dumping it into
water or burying it in soil have been forbidden in recent
times [2].

Traditionally, resale or gift was an option for the disposal of


excess munitions. Although this is still possible, concerns
about the proliferation of conventional weapons mitigates
against the use of this as a principal method. In particular,
the Wassenaar Agreement on Export Controls for
Conventional Arms and Dual-Use Goods and Technologies
places restrictions on the use of resale as a disposal method.

However, due to poor practice in the past, our generation is


facing many problems caused by buried or sunk
ammunition. There are local regulations that areas affected
by UXO pollution need to be searched and cleared of UXO
before the construction of highways, shopping centers, gas

Resale also has no application for unserviceable munitions,


313

AMMUNITIONSURPLUSTHREATTOPOSSESSORSDISPOSALMETHODS:REVIEWOFDEMILITARIZATIONTECHNOLOGIES

or oil pipes, airports and so on can begin. Many UXOs are


still found in some areas in Europe on an almost daily basis,
mainly as a heritage from WWI and WWII.

OTEH2016

disposal are a lottery. The data given in the table below


show the differences in price offered by different
companies; the difference is not just a reasonable 10-20%,
but can reach magnitudes of 5x or even 10x and higher.

Great safety concerns are caused by chemical ammunition


destroyed after WWII by simply sinking it into the sea.

Table 3: Practical demilitarization costs

Ammunition demilitarization
Ammunition demilitarization/destruction can be carried out
by different types of organizations, such as commercial
companies, international organizations or military units. The
most realistic, internationally acceptable and practical
methods of disposal should therefore be destruction or
demilitarization.
Destruction is usually performed by a military EOD unit,
while demilitarization is carried out in demilitarization
facilities, mobile facilities or even by military units.
There are a wide range of technical factors that determine
the overall demilitarization or destruction plan, not least the
need for experienced and qualified personnel for
demilitarization, and potentially high funding requirements.
There is a global shortage of qualified personnel
experienced in developing ammunition demilitarization
facilities and programs.

Ammunition demilitarization is a complex, multi-step


process, depending on many factors. In general, ammunition
needs to be withdrawn from service, transported to
demilitarization/disposal sites, unpacked, and dealt with
according to technological procedures. The process needs to
be safe, cost-effective and environmentally-friendly. This
can be achieved by recovering, recycling and reusing as many of the metals and energetic materials the ammunition contains as possible.

Demilitarization projects are more of a campaign than a


continuous process. The projects are based on military needs
and the funds available for disposal/demilitarization. If a
contractor-run facility is designated only for
demilitarization, it is difficult for it to be competitive in
comparison to a facility which carries out industrial
explosives production and demilitarization projects in
parallel. Industrial explosives production is dependent on
the season, so autumn or winter is the best period for
demilitarization, when demand for industrial explosives is
less than in other periods of the year. At the same time, a
facility which also produces industrial explosives can utilize
the energetic materials for its own production, while a
demilitarization facility needs to pay for the destruction of
the energetic materials and their waste, or is forced to sell recovered explosives at a lower price.
The cost of the destruction of ammunition is probably the
most important factor, as the destruction of large quantities
of conventional ammunition is expensive. Little data is
publicly available on the costs of the demilitarization of
ammunition. An example of indicative costs that are
available for Western Europe is shown in the table below;
costs for less developed countries will be significantly less,
due to lower labor costs.
Table 2: Indicative demilitarization costs [1]

Ammunition type               Indicative demilitarization costs (EUR/ton)
Small arms ammunition         101-529
Fuzes                         237-1039
Propellant                    856
Warheads (High Explosive)     564-610
Artillery                     419-757
Pyrotechnics                  1654
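As a rough illustration of how the ranges in Table 2 can be used for budget planning, the following minimal sketch totals the lower and upper cost bounds for a mixed stockpile; the stockpile tonnages used here are illustrative assumptions, not data from this paper:

# Minimal sketch: bounding the demilitarization budget for a mixed stockpile
# using the indicative EUR/ton ranges from Table 2. The stockpile tonnages
# below are illustrative assumptions, not data from the paper.
COSTS_EUR_PER_TON = {            # (low, high) from Table 2
    "small arms ammunition": (101, 529),
    "fuzes": (237, 1039),
    "propellant": (856, 856),
    "warheads (HE)": (564, 610),
    "artillery": (419, 757),
    "pyrotechnics": (1654, 1654),
}

stockpile_tons = {               # assumed quantities for the example
    "small arms ammunition": 120,
    "artillery": 300,
    "propellant": 80,
    "fuzes": 15,
}

low = sum(COSTS_EUR_PER_TON[k][0] * t for k, t in stockpile_tons.items())
high = sum(COSTS_EUR_PER_TON[k][1] * t for k, t in stockpile_tons.items())
print(f"Indicative demilitarization cost: {low:,.0f} - {high:,.0f} EUR")

Such a bounding estimate is only a starting point; as the paper notes below, the prices actually offered by companies can differ by far more than the tabulated ranges suggest.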

Picture 12: The main steps in the ammunition demilitarization process [23, 24]
Our experience has shown that the costs of destruction/disposal are a lottery. The data given in the table below show the differences in price offered by different companies; the difference is not just a reasonable 10-20%, but can reach factors of 5x or even 10x and higher.

Table 3: Practical demilitarization costs

There are different types of ammunition; some are simple and others more complex. Nevertheless, if ammunition is subject to demilitarization, all the energetic materials and elements need to be treated accordingly. If the project is funded by international organizations, then the energetic materials should not be re-used for military purposes, and
all the materials that have been in contact with explosives


should be decontaminated, and some even mutilated before
selling to the market. Ammunition can be partially or totally
disassembled. Fuzes contain a lot of low value elements, so
they are not usually subject to total disassembly.

Picture 13: The basic elements of ammunition which are generated in the process of demilitarization and their sub-elements: energetics and inert materials.

Unpacking and handling of dunnage

Unpacking is the first important step in the demilitarization of ammunition. It generates empty packaging materials, which are very often impregnated with chemicals used for the preservation of wood and protection against water damage and termites. Untreated ammunition boxes are bulky, having the same volume as full boxes. Transportation without downsizing can significantly increase demilitarization costs. In addition, boxes contain many metal parts like nails and other fittings, which cause difficulties in separating metals from wood. When chemicals are not present, the wood can be used as a fuel or can be reprocessed for the production of pellets. Hills of empty boxes lying somewhere around a demilitarization facility are the first indication of poor practice, but unfortunately this is very often seen. Well-organized companies have, within the process, a crusher machine and a magnetic system for separating metals from wood; some companies have their own wooden pellet production line. Dunnage which is contaminated can be burnt in special incinerators, which need to be equipped with a pollution abatement system (PAS). For the incineration of such materials, companies usually charge an additional fee.

Picture 14: Huge numbers of empty boxes at a demilitarization site are the first indication of poor management

Disassembly and downsizing

Reverse assembly

Real demilitarization starts with the disassembly or downsizing of the ammunition. The safest and most practical method is disassembly by reverse assembly. The number of operations depends on the type of ammunition. The modern approach to ammunition design includes requirements for disposal, which cannot be expected when dealing with old ammunition produced in the middle of the 20th century. Different operations pose different hazards. The process may include several operations:
- the projectiles are separated from the cartridges;
- the fuzes are removed from the projectiles;
- the boosters are separated from the fuzes;
- the primers are removed from the cartridges;
- driving/rotation bands are removed from the projectiles.
Therefore, the order of disassembly, and a statement of which operations should be performed remotely, are defined in the Standard Operating Procedures (SOP).
Operational shields are required when the operation to be performed generates an unacceptable risk of exposure. Operational shields limit the operator's exposure to blast overpressure not in excess of 15.9 kPa, to fragments with energies of less than 79 Joules, and to thermal fluxes of no more than 12.56 kilowatts (kW) per m². For operations involving intentional initiation or detonation, operational shields should be capable of limiting overpressure levels [6].
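As a simple illustration of these operator-protection limits, the following minimal sketch checks a predicted (unshielded) exposure against the three thresholds quoted above; the helper function, its name and the example numbers are illustrative assumptions, not part of any standard:

# Minimal sketch: checking predicted operator exposure against the shield
# criteria quoted above (assumed here as 15.9 kPa overpressure, 79 J fragment
# energy and 12.56 kW/m2 thermal flux); purely illustrative, not a standard.
def shield_required(overpressure_kpa: float,
                    fragment_energy_j: float,
                    thermal_flux_kw_m2: float) -> bool:
    """Return True if any predicted exposure exceeds the quoted limits."""
    return (overpressure_kpa > 15.9
            or fragment_energy_j >= 79.0
            or thermal_flux_kw_m2 > 12.56)

# Example: an operation with a modest predicted event still needs a shield
print(shield_required(overpressure_kpa=22.0,
                      fragment_energy_j=40.0,
                      thermal_flux_kw_m2=3.0))   # -> True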
The determination of the maximum credible event for the
materials and operational scenario involved is an essential
part of the evaluation of the operator protection
requirements.
In addition to those operations where a risk assessment
shows an unacceptable level of risk, operational shields are
provided to separate the operator from the item being
processed during the following operations:



1) Disassembly of loaded boosters, fuzes, primers, and


blank ammunition.
2) Removal of base plugs from loaded projectiles where
the design of the projectile is such that explosive
contamination of the base plug is not positively
precluded.
3) Removal of fuzes from pentolite loaded projectiles.
4) Disassembly of loaded bombs and warheads.
5) Removal of fuzes from hand grenades loaded with high
explosives.
6) Pull-apart of fixed ammunition, 20 mm and larger.
7) Disassembly of foreign ammunition or other
ammunition of uncertain design and condition.
8) Electrical testing of igniter circuitry of rockets, missiles, or
any other electrically-initiated explosive item.

Cryogenic Fracturing
This technique was developed for the demilitarization of
chemical munitions. The ammunition is cooled down in a
container filled with liquid nitrogen. The steel of the
projectiles becomes brittle due to the low temperature.
Subsequently the projectiles are transported to a hydraulic
press and fractured to recover the explosive or chemical
agent.
Cryo-fracturing is widely used in Europe for the commercial
demilitarization of small contained explosive units and
components. The freezing of the item desensitizes the
explosives, allowing them to be safely crushed and
subsequently processed in a rotary kiln. Many tens of
thousands of cluster munition bomblets have been disposed
of using this technique.

The order of the disassembly of ammunition elements is


important; there are different practices for which step should
be done first. For fixed ammunition the first logical step is
to separate the projectile from the cartridge. In some
countries there is a regulation that the most hazardous
operation should be done first (this includes fuzes and
primers). Simple machines perform only one operation at a
time, while modern, more sophisticated machines can
perform multi-step operations all at once. The optimal
solution depends on the quantity of ammunition; when the
quantity is relatively small, a simple one-step machine is a
better choice, while an automatic machine is better for a
large quantity for disposal. The best approach to choosing
the disassembly order is to perform a risk analysis of the
entire process and design it to reduce hazards to the
minimum possible level.

Mechanical downsizing

Mechanical downsizing makes use of different equipment such as a lathe, hydraulic press, saw and hydro-abrasive cutter. The cutting tool is used to open the ammunition, to separate fuzes from projectiles, to separate the cartridge from the projectile, and so on.

In addition to the use of a lathe, downsizing can be achieved by sawing or cutting the ammunition into smaller parts, if proper precautions are taken. These techniques can be applied all over the world. The application of these techniques to the reverse assembly of ammunition may create dangerous situations, as most explosive fillings are sensitive to friction and impact.

This method of downsizing can be selected if the safety of the personnel involved is guaranteed. The use of remotely controlled processes will be sufficient in most cases, and is even mandatory for safe operation.

Ammunition can also be sectioned by high-pressure hydro-abrasive technology. The advantage of the hydro-abrasive cutting (HAC) technique is its flexibility, which allows for the cutting of all ammunition from 40 mm up to large aircraft bombs and torpedoes. The use of remotely controlled processes will in most cases be sufficient, as well as mandatory for safe processing. The HAC system is especially suited to the cutting of ammunition containing plastic bonded explosives [2].

In conclusion, mechanical downsizing is a suitable process when it is carried out by remote control. High-pressure hydro-abrasive technology can be used for explosive removal operations, but special care needs to be taken because the abrasive can contaminate the recovered explosive, which makes it more sensitive and hazardous during further treatment.

Explosive charge separation techniques

Common techniques designated for separating the explosive content from the metal containers include:
- melt-out techniques,
- high-pressure water washout,
- solvent washout.

Melt-out techniques

Melt-out techniques are widely used to remove the explosives and fillings from filled ammunition in the molten state. The most common example is TNT and explosive compositions based on TNT with a content which allows cast-loading technology, such as TNT/RDX, TNT/HMX or a mixture of TNT/RDX and aluminum with a TNT content of up to approximately 70% by weight. Melting is more difficult for compositions having a high content of non-meltable components. It is obvious that explosives which have a melting temperature very close or equal to their decomposition temperature, such as RDX and HMX, cannot be melted out.

Different melt-out technologies are available; they can be grouped based on:
- how heating is applied: direct, indirect or a combination;
- melting-out at normal or increased pressure (temperature below or above 100 °C);
- the type of heating media used: without media, hot dry air, hot wet air, water, steam, various salty solutions, oil, paraffin, fatty acids and even TNT itself.

TNT has a melting temperature of around 80 °C (depending on purity) and is stable up to 160 °C; at this and higher temperatures slow decomposition starts, and visible exothermic decomposition begins at almost 240 °C. The maximum advisable working temperature for TNT melting operations is 120 °C. Melting is faster when the heating medium is applied directly to the explosive charge, but on the other hand the heating medium can contaminate the TNT and its quality can deteriorate, which is usually controlled by measuring the TNT solidification point. Hot water as a working medium has a temperature of approximately 95 °C in practice, so to increase the temperature and consequently speed up the melting process, melting should be done under pressure. This can be performed in an autoclave; a typical cross section is shown in the picture below:


Picture 15: Autoclave melt-out system loaded with artillery projectiles [24]
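A back-of-the-envelope check of why pressurized (autoclave) operation is needed to reach the 120 °C working limit with water as the heating medium; the Antoine correlation coefficients used below are standard handbook values for water above 100 °C, assumed here rather than taken from the paper:

# Minimal sketch: saturation pressure of water at the maximum advisable TNT
# melt-out temperature (120 C), using the Antoine equation with handbook
# coefficients for water in the 99-374 C range (assumed, not from the paper).
from math import log10  # imported for completeness if the inverse form is needed

A, B, C = 8.14019, 1810.94, 244.485   # Antoine coefficients, P in mmHg, T in C

def water_saturation_pressure_bar(t_celsius: float) -> float:
    p_mmhg = 10 ** (A - B / (C + t_celsius))
    return p_mmhg * 0.00133322         # mmHg -> bar

print(round(water_saturation_pressure_bar(95.0), 2))    # ~0.85 bar: open hot-water bath
print(round(water_saturation_pressure_bar(120.0), 2))   # ~1.98 bar: autoclave required

The roughly 2 bar figure at 120 °C illustrates why the melt-out vessel must be a pressure vessel once the working temperature is pushed above the normal boiling point of water.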

Molten TNT must be transferred into a suitable form such as


granules, flakes or small pieces. Flakes are a very common
form which can be easily manufactured. The disadvantage
of this system is that it requires cooling water, which is not
available in remote areas. The process is performed on a
rotary drum, as seen in the picture:
Picture 16: The shape of partially melted TNT removed from an artillery projectile. (Photo taken by author)
The central part remains unmelted. Such TNT is not suitable
for further processing without additional treatment such as
granulation or flaking.

Picture 17: The cooling of molten TNT on a rotary drum is a common method used for flaking TNT in practice.
The melting process requires additional heat for the phase
transformation from solid to liquid, so melting is generally a
slow process. If the shape of the ammunition/warhead
allows the cutting of the projectile or warhead into two
pieces (by sawing, cutting or breaking), with each having a
smooth concave inner shape, then it is not necessary to melt
the entire charge, but just a small layer attached to the shell.
The charge will drop from the case and the removal process,
instead of lasting several hours, can be done in seconds.
This technology is mostly applicable for artillery projectiles
and anti-tank mines.

When the heating medium is water or steam, processed


water or condensed water, it becomes contaminated when in
contact with TNT. Such water is known as pink water.
Pink water is not actually pink in color when fresh; it is a
yellow transparent liquid. TNT is light sensitive, so when it
is exposed to sunlight it changes color from light yellow to
dark brown. Pink water is toxic and should not be released
into the environment. Different technologies are available
for treating it; if the quantity is small, simple evaporation,
followed by the incineration of the residue, is a practical
solution for disposing of pink water.

Melting is just on the surface/interface of the projectile


explosive; when the explosive melts, the explosive charge
drops through the table into a suitable box for further
processing.

To avoid generating pink water as a waste stream, attempts have been made to heat the explosive charge without the presence of water or steam. Such promising-looking technologies achieve the melt-out by using microwave or electrical induction heating. Neither technology requires a heating medium at all, so the removed TNT is expected to be uncontaminated and the process environmentally benign. Both methods require several safety measures, and for this reason these technologies have not found application in practice.

Another attempt to reduce the amount of pink water is to use a salty water solution. Salt reduces the solubility of organic compounds in water and increases the heat capacity, which makes a salty water solution more effective than technical water. A closed water circuit system is another possible solution to eliminate pink water.

The most advanced heating media are oils, paraffin or fatty acids. Using such media, higher temperatures can be reached at normal pressure for the melting process, and the heating media have a high heat capacity and do not mix with TNT, which allows simple gravity separation of the liquid phases. Because liquid TNT has roughly double the density of the heating media, it sinks to the bottom and can be separated. TNT can be further granulated, flaked or cast into different shapes. This technology is fast, safe and environmentally friendly. Liquid TNT can dissolve a small amount of the heating media (several percent), lowering TNT's solidification point and making the TNT unsuitable for the production of artillery ammunition.

Picture 21: On the left, molds for liquid TNT crystallization; in the center, TNT in the process of crystallization (liquid TNT and growing TNT crystals); on the right, already crystallized TNT, a raw material much in demand for industrial explosives. (Photos taken by author)
There is big demand for TNT for industrial explosives, since
production of new military grade TNT has been reduced;
this is why recovered TNT (known as secondary TNT) can
easily find customers.

Picture 18: Table for melting the explosive charge out of a projectile. (Photos taken by the author, with the permission of Spreewerk-Gospić, Croatia)

The melt out technique is also used for the demilitarization


of ammunition containing white phosphorus (WP). The
ammunition is drilled and immersed in a bath of warm
(50 °C) water. The phosphorus melts at 42 °C and must be
collected under water; this procedure is necessary because
of the violent reaction of phosphorus with the oxygen in the
air. WP in liquid form reacts instantly; while solid and wet it
takes some time, first for it to dry and then to react, but it is
less violent than WP in liquid form. The recovered WP has
a commercial value (an ingredient for Coca Cola and
fertilizers). Small quantities of WP munitions can be
disposed of by OD, but expert advice should be sought due
to the problems of environmental contamination. In some
countries it is forbidden to use OD for the destruction of WP
ammunition. It is advisable to put the donor charge under
the WP ammunition; the reason for this is to ensure the
reaction of WP with air. If the donor charge is put on top, as
is normal procedure for HE ammunition, unreacted WP will
be injected into the soil, mud or sand. Ignition may then
occur at any time afterward, when WP comes into contact
with air; fire can occur even years after disposal.

The same method can be used for the removal of non-meltable explosive charges (based on RDX or HMX, like A-IX-1 or A-IX-2) if the ammunition design allows (i.e. the
internal cavity is cylindrical in shape) and when the
explosive charge is bonded to the warhead with paraffin or
other similar easily meltable or thermoplastic materials.

Picture 19: Melting out non-meltable explosive charges from projectiles: on the left, HE 57 mm projectiles (of Soviet origin); on the right, explosive charges removed from the projectiles. (Photos taken by author)
The explosive is A-IX-2 (a mixture of phlegmatized RDX and aluminum powder). A-IX-2 is stuck to the projectile with wax, so melting the wax allows removal of the explosive charge by gravity.

Picture 22: On the left, modern equipment for melting out WP from smoke ammunition; on the right, an artillery projectile left in the air. (Photos taken by the author with the permission of Spreewerk-Gospić, Croatia)

The artillery projectile left in the air is part of the technology, in order to allow droplets of WP remaining inside the shell to react with the oxygen from the air; visible smoke is coming out of the shell. The capacity of the melting equipment shown on the left is sufficient to demilitarize all the WP ammunition in our region.
Picture 20: The equipment can use oils, paraffin or TNT as a heating medium. This equipment performs both the melting and the separation operation at the same time. (Photo taken by author)

Water jet washout

The principle of water jet washout of explosive fillings is simply the use of a high-pressure water jet. The water jet is focused on the explosive filling by means of a rotating nozzle. With high-pressure water washout it is possible to remove all kinds of explosive fillings from the metal casing of the ammunition. Washout is especially suitable for the removal of plastic bonded compositions (PBX) and other non-melt-cast explosives.

Picture 24: On the left, explosive A-IX-2 stored wet after the wash-out operation; the explosive is gray because it still contains unreacted aluminum powder. On the right is the same explosive, this time white in color; the aluminum has already reacted with water and the RDX is starting to decompose. (Photos taken by author)
Solvent Washout
This technique makes use of a solvent that will readily
dissolve the explosives. Since most explosives, like TNT
and RDX, are not (or are at least very weakly) soluble in
water, other solvents have to be chosen. Most explosives
will be dissolved in solvents such as methylene chloride,
methyl alcohol, acetone or toluene. It should be emphasized
that large amounts of solvent will be needed; large recovery
and storage facilities for the solvent will be essential.
Solvent washout enables the recycling of the explosives.
This technique is preferable for the reuse of high-value
military explosives such as RDX and HMX. High-value
explosives can be also recovered from various compositions
by solvent operation in combination with melt-out and
wash-out technology. Due to the need for large amounts of
combustible solvent, sophisticated equipment and process
control, and well trained personnel, this technology is not
suitable for mobile and portable equipment. The technology
is performed in specialized facilities, usually military
explosives production facilities such as PIB Bari.

Picture 23: The principle of removing explosives from ammunition using a high-pressure water jet; US Pat 5737709
The basic characteristics of water jet washout technology
are:
The water jet will completely remove all kinds of
explosives;
Less pollution in the building, less TNT vapor means
more hygienic working conditions;
The water in the washout process can be recycled (no
waste water problem);
The explosives can be separated from the water for reuse;
The explosives can be transformed into slurry that can be
classified as UN class 4.1.
The water jet washout installation can be very effectively
combined with the hydro abrasive cutting system.

Although many of the techniques previously described


allow a degree of resource recovery and reuse (R3), a
number of techniques and applications concentrate on this
aspect.

There are also some disadvantages to the process:
- the installation requires high pressure (at least 2000 bar);
- water is heated during the process;
- the high temperature can melt wax, which can saturate the filter;
- some explosives contain aluminum powder, which reacts with water/humidity;
- hydrogen is released from the mixture, which can cause a fire or even an explosion.
When subjected to high water pressure, explosives take a muddy form and aluminum can react with water, releasing hydrogen, a light and explosive gas. Aluminum oxide can cause the decomposition of RDX. For example, if explosive A-IX-2 in mud form remains for years after the wash-out operation, the residue may still contain aluminum and RDX. There is a lot of data about RDX hydrolysis in dilute systems, but none about hydrolysis in concentrated form.

Energy Recovery

Energy recovery involves using the energetic materials and the energetics in the waste streams to reclaim energy. This can involve co-firing of slurries in boilers, rotary kilns or fluidized bed combustors, or it can involve the recovery of waste heat, such as a waste heat boiler generating steam and electricity in the pollution control system of an incinerator. Energetic materials have much less internal energy content than traditional combustible materials. The energy output is much less than one would wish; therefore, energy recovery should not be the primary goal, but an alternative solution when other options are not available.
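To put the low energy yield in perspective, here is a minimal sketch comparing typical specific energies; the figures of roughly 4.6 MJ/kg for the detonation energy of TNT and about 42 MJ/kg for fuel oil are commonly quoted approximations, assumed here rather than taken from this paper:

# Minimal sketch: rough energy-recovery comparison per tonne of material.
# The specific energies below are commonly quoted approximations (assumed),
# not values from the paper.
TNT_MJ_PER_KG = 4.6        # approximate detonation energy of TNT
FUEL_OIL_MJ_PER_KG = 42.0  # approximate heating value of fuel oil

tonne = 1000.0  # kg
print(f"1 t TNT      ~ {TNT_MJ_PER_KG * tonne / 3600:.0f} kWh")       # ~1278 kWh
print(f"1 t fuel oil ~ {FUEL_OIL_MJ_PER_KG * tonne / 3600:.0f} kWh")  # ~11667 kWh
print(f"ratio ~ {FUEL_OIL_MJ_PER_KG / TNT_MJ_PER_KG:.0f}x in favour of fuel oil")

On these rough numbers, a tonne of recovered explosive yields roughly an order of magnitude less heat than a tonne of conventional fuel, which is why energy recovery is treated as a fallback rather than a primary disposal route.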

Metal Scrap Recovery


One of the easiest forms of resource recovery is the reuse of
scrap metal from munitions casings. Essential to this,
however, is that the scrap metal is safe for reuse. Typically,
flashing is used to remove traces of energetic materials
from scrap metal before it is sold. This involves the heat
treatment of the metal at approximately 400 °C, and many
incinerators can be used as flashing furnaces.



Before scrap metal can be released to a commercial contractor,


most nations have Free From Explosives (FFE) requirements
to be met. These requirements are more codified in some
countries than in others. A particular example is the US
Department of the Army pamphlet on the Classification and
Remediation of Explosive Contamination. The pamphlet
defines four levels of contamination [6]:
Table 4: Levels of explosive contamination

Level of contamination   Remark
1X   Articles which have only been subjected to routine, after-use cleaning, and therefore substantial contamination remains.
3X   Articles where surface contamination has been removed, but sufficient contamination may remain in less obvious places to present an explosive safety hazard.
5X   Articles where there is not enough remaining contamination to present an explosive safety hazard.
0    Articles that have never been contaminated.

Picture 25: Black powder, because of its high sensitivity, needs to be disposed of in situ by dropping the powder charge into water. (Photo taken by author)
Chemical Conversion
Waste streams for demilitarization can be recycled by
chemical conversion into products that have market value.
The application is dependent on the particular material
being recycled; however, the goal is to turn the relatively
inexpensive demilitarization product into a high value
product. Some examples are:

For demilitarization, the 3X and 5X levels are relevant,


defining the efficiency of the demilitarization method, and
these definitions are often found in descriptions of
demilitarization techniques. Under US regulations, 3X
articles cannot be sold to the general public, but 5X articles
may be. There is further guidance in the pamphlet on
assessing the level of contamination.

White Phosphorus (WP)


WP is an incendiary found in smoke ammunition, spotting
charges in training ammunition and in fuzes for NAPALM
bombs. It is hazardous and difficult to dispose of. In some
developed countries, there is a ban in place against the open
burning of smoke munitions.

Another aspect to be considered in the recycling of metal


scrap is the mutilation of vital parts. A demilitarized
munition may still look, to all intents and purposes, like a
real munition. If these elements (like empty projectiles and
empty hand grenades) are sent for scrap, then there is the
potential for unwarranted calls to Explosive Ordnance
Disposal teams if they are found. There is a lack of
technical standards for the mutilation of munitions scrap.

WP can be directly converted into phosphoric acid by the


incineration of white phosphorus in air and the quenching of
phosphoric pentoxide in a water tower. More often WP is
processed in two steps: the first is the melting out from
ammunition, an operation performed at ammunition
demilitarization facilities; the recovered WP is then shipped
to chemical plants where it can be converted into phosphoric
acid or into red phosphorous (RP). Phosphoric acid is a
commercially useful product (in particular in soft drinks like
Coca Cola) and is extensively used as a raw material in the
chemical industry. RP is in use as a raw material for making
matches and pyrotechnics.

Recycling energetic materials as fertilizer


Energetic materials generally have a high nitrogen content,
which can be usefully recycled as fertilizer or components
of fertilizer. A good illustration of this is the dual use of
ammonium nitrate as a component of a commercial
explosive (Ammonium Nitrate & Fuel Oil or ANFO) and as
a fertilizer. For military explosives and propellants, some
form of treatment is required before they are suitable to use
as fertilizer. The most common form of treatment for this
application is hydrolysis. Many formulations of rocket
propellants contain heavy metal as a catalyst. Heavy metals
are toxic to the environment; this is the reason for not using
propellants and explosives as fertilizers, but for finding
other, more viable, methods.

WP + O_2 \text{ (from air)} \rightarrow P_2O_5; \qquad P_2O_5 + 3H_2O \rightarrow 2H_3PO_4

WP \xrightarrow{\text{catalyst, elevated temperature}} RP

Facilities for the recovery of WP from ammunitions exist in


Europe (Maxam/Expal has plants in Spain and Bulgaria,
Alsetex have a modern plant in France, ISL has plants in
Germany and Spreewerk in Croatia).

Such a solution can be used as fertilizer; the proper dilution rate with water needs to be taken into account.


It is interesting that even today, many research studies are going on around the globe on how best to utilize recovered TNT and propellant; while TNT is not in question, gun propellants are more often burned than utilized for industrial explosives. In former Yugoslavia in the 1980s,
almost all propellants were utilized for the production of
industrial explosives. Even though the technology for
processing gun propellants exists, a huge amount of
propellant is destroyed by open burning. Based on a rough
estimation, at least 1 million EUR was lost because
propellants were burnt rather than utilized on the territory of
former Yugoslavia after the civil wars.

Conversion of Ammonium Picrate


The US has large quantities of ammonium picrate, an
explosive known as Explosive D, which is loaded in
obsolete Armor Piercing (AP) projectiles. NSWC Crane and
Gradient Technologies have developed a process for
converting ammonium picrate to picric acid. Their patented
process involves the use of concentrated sulfuric acid and
toluene as an organic solvent. Picric acid has commercial
applications, particularly in dye manufacture.
The Lawrence Livermore National Laboratory (LLNL) has
investigated methods of synthesizing the explosive TATB
from a diverse feedstock including ammonium picrate and
picric acid. TATB is a particularly expensive explosive to
synthesize, so the value added through this process is
considerable. TATB can be used as an explosive in nuclear
weapons, and is the main component or ingredient in Low
Vulnerability Ammunition (LOVA).
Acid digestion or solvolysis
RDX and HMX can be recovered from PBXs with a process
known as acid digestion or solvolysis. The process consists of
two phases: in the first, the polymeric binder is solubilized
using nitric acid for HMX, or a mixture of water, calcium
chloride and a surfactant (wetting agent) for RDX. Elevated
temperatures in the order of 70-80C are required during this
phase. In the second phase, the energetic material and the
binder are separated by centrifuge. This process has been
patented by TPL Inc. and has been shown to provide very pure
HMX. A 68kg per day sub-scale plant has been tested, and the
HMX is considered to offer a cost advantage over
commercially-manufactured HMX, but the RDX is considered
to be uneconomic to recover at this point in time.

Picture 27: On the left, artillery propellant set up for burning; on the right, the burning of artillery propellant. (Photos taken by author)
Propellants can be utilized in industrial explosives as
ingredients in different types of industrial explosives such as
ANFO, slurry and emulsions, or they can be mixed with oil
and used as such.

This process also enables the recovery of the acid/binder


solution from HMX, which, when neutralized with
ammonium hydroxide and dried, generates Ammonium
Nitrate/Polymeric Fuel, which is viewed as a possible
commercial blasting explosive. NEXPLO Bofors (now
EURENCO Bofors) have also patented a solvent-based
process for extracting pure RDX or HMX from explosives.
Reuse as Commercial Explosive
Energetic materials removed by techniques such as washout
and melt out can be reused as commercial explosives rather
than destroyed. Commercial applications for explosives vary
from mine blasting, which requires cheap and easily
transportable explosives such as ANFO, to more demanding
applications which require military style explosives
[23,24,28,31].

Picture 28: Artillery propellants can be used as industrial explosives even without any additives; usually they are mixed with ammonium nitrate and other ingredients to optimize safety and explosive output. (Photo taken by author)
Higher value commercial explosives can be formulated
using RDX/TNT based munitions.
Nano diamond production
Detonation nano diamond (DND), also known as ultra
dispersed diamond (UDD), is a diamond that originates in a
detonation. When an oxygen-deficient explosive mixture of
TNT/RDX is detonated in a closed chamber, diamond particles
with a diameter of ca. 5 nm are formed at the front of the
detonation wave in a span of several microseconds [32].

Picture 26: On the left, the former KIK Kamnik demil facility in Skopice, where explosives and propellants were removed from ammunition; on the right, the KIK Kamnik former explosives production facility, which used recovered TNT and propellants for manufacturing industrial explosives in the 1980s. (by Google Earth)

In 2012 the SKN Company was awarded the Ig Nobel Peace


Prize for converting old Russian ammunition into nanodiamonds.


The Nano Carbon Research Institute in Japan has


demonstrated that nano-materials can shuttle chemotherapy
drugs to cells without producing the negative effects of
today's delivery agents.


Clusters of the nano-diamonds surround the drugs, ensuring


that they remain separated from healthy cells and preventing
unnecessary damage; on reaching the intended targets, the
drugs are released into the cancer cells. The leftover
diamonds, hundreds of thousands of which could fit into the
eye of a needle, do not induce inflammation in cells once
they have done their job.

References
[1] International Ammunition Technical Guideline - IATG
10.10; Demilitarization and destruction of conventional
ammunition, UN ODA 2015
[2] OSCE Handbook of Best Practices on Conventional
Ammunition, 2008
[3] Application of Demilitarized Gun and Rocket
propellant in Commercial Explosives NATO Science
Series2000, Edited by Oldrich Machacek;1990
[4] U.S. Army Defense Ammunition Center: United States
Munitions Demilitarization, Conventional Ammunition
Demilitarization Capabilities
[5] Review Of Demilitarisation And Disposal Techniques
For Munitions And Related MATERIALS
http://www.rasrinitiative.org/pdfs/MSIAC-2006.pdf
[6] DoD Ammunition and Explosives Safety Standards:
General Explosives Safety Information and
Requirements DoDM 6055.09-M-V1, February 29,
2008
[7] RTO TECHNICAL REPORT: TR-AVT-115;
Environmental Impact of Munition and Propellant
Disposal
http://underwatermunitions.org/EnvironmentalImpact_
of_Munition_and_propellant_disposal_-_NATO.pdf
[8] Mr. Gary Carney: Defense Ammunition Center;
http://www.dtic.mil/ndia/2008psa_peo/carneyday2.pdf
[9] SAS: Unplanned Explosions at Munitions Sites
(UEMS): Excess Stockpiles as Liabilities rather than
Assets
http://www.smallarmssurvey.org/fileadmin/docs/QHandbooks/HB-03-UEMS/SAS-HB03-UEMSHandbook-full.pdf
[10] SAS: Conventional Ammunition in Surplus; Edited by
James Bevan, January 2008; Co-published with BICC,
FAS, GRIP, and SEESAC with support from the
German Federal Foreign Office
[11] http://www.smallarmssurvey.org/fileadmin/docs/DBook-series/book-05-Conventional-Ammo/SASConventional-Ammunition-in-Surplus-Book.pdf
[12] SAS: The UEMS Incident Reporting; http://www.smallarmssurvey.org/fileadmin/docs/H-Research_Notes/SAS-Research-Note-40.pdf
[13] Unplanned Explosions at Munitions Sites, updated 13 September 2016 (data covering January 1979 to June 2016); http://www.smallarmssurvey.org/weapons-and-markets/stockpiles/unplanned-explosions-at-munitions-sites.html
[14] UN Department for Disarmament Affairs: A Destruction Handbook, small arms, light weapons, ammunition and explosives; http://www.smallarmssurvey.org/fileadmin/docs/L-External-publications/2001/2001-UNDDA-Destruction-Handbook-Small-Arms.pdf

Picture 29: Nano diamond detonation soot is produced by the detonation of explosives and contains about 54% nano-diamond and about 40% nano-graphite.

CONCLUSION
Numerous unplanned explosions at munitions sites (UEMS)
have drawn increasing public attention to the dangers
associated with the inappropriate management and storage
of explosive materials.
A practical solution to reduce the number of UEMS and
their catastrophic consequences is to remove hazardous, old and otherwise surplus ammunition from service and dispose of it.
There are several options for disposing of ammunition from
service; disposal does not necessarily mean demilitarization
and destruction.
Demilitarization and destruction use two main techniques:
open detonation & open burning OD/OB, and industrial
demilitarization.
Industrial demilitarization is a modern approach which
allows a high level of R3 (reuse, recycle and recover) in
taking energetic and non-energetic materials from
ammunition; this makes industrial demilitarization a safe,
cost-effective and environmentally-friendly method.
The choice of the most suitable technology for the
demilitarization of ammunition will heavily depend on the
local situation. In practice it will not be just one technology,
but a combination of different technologies.
Knowledge and expertise in the field of industrial demilitarization in our region is at a high level. This can be a potential opportunity for regional companies to export to many other countries around the globe which still require equipment and know-how for the disposal of surplus ammunition.
[15] TM 9-1300-214; Military Explosives 1984


http://militarynewbie.com/wpcontent/uploads/2013/11/TM-9-1300-214-MilitaryExplosives.pdf
[16] Physical Security and Stockpile management of Arms,
Ammunition and Explosives, RACVIAC, Internal
training material
[17] Shelf-life design work, DEFENCE STANDARD FSD
0223, 2004
[18] Ammunition Safety Manual; Defence Material
Administration, Svenskt Tryck Stockholm, 1990
[19] Petar V. Maksimovi: Tehnologija eksplozivnih
materija, Beograd 1972
[20] Petar. V Maksimovi: Eksplozivne materije, Beograd
1985
[21] Prof.dr. Momilo Milinovi: Struktura vazduhoplovnih
raketnih ubojnih sredstava u smislu bezbednosti u toku
skladitenja, TRZ Kragujevac, 2013
[22] Slavia Stojiljkovi; Osnove dijagnostikovanja i
praenja stanja kvaliteta municije, TRZ Kragujevac,
2013
[23] Slavia Stojiljkovi: laboratorijska ispitivanja raketnih
goriva, TRZ Kragujevac, 2013
[24] Bla Miheli: Energetske materije Eksplozivi, baruti i
pirotehnike smee, TRZ Kragujevac, 2013
[25] Bla Miheli: Energetske i druge opasne materije u
municiji ratnog vazduhoplovstva i protivazdune
odbrane i njihovo unitavanje, TRZ Kragujevac, 2013

OTEH2016

[26] Milan Pavlovi: Demilitarizacija municje, TRZ


Kragujevac, 2013
[27] Jasmine Kovaevi: Odravanje municije, TRZ
Kragujevac, 2013
[28] Delaboracija municije; SSNO, 1976
[29] .., .., ..,

. ..
.. . - .: , 1998.
319
[30]
, 1973
[31]
,
,
1986
[32]
; , ; ,
; , .
". " 2007
[33] Nano Diamond Detonation Soot; http://nanozl.en.ecplaza.net/nano-diamond-detonationsoot--2018722156210.html; https://en.wikipedia.org/wiki/Detonation_nanodiamond
[34] High pressure washout of explosives agents US Pat
#5737709; https://www.google.si/patents/US5737709


SHOCKWAVE OVERPRESSURE OF PROPELLANT GASES AROUND THE MORTAR
MIODRAG LISOV
Military Technical Institute, Ministry of Defence, Serbia
SLOBODAN JARAMAZ
University of Belgrade, Faculty of Mechanical Engineering, Serbia, sjaramaz@mas.bg.ac.rs
MIRKO KOZIĆ
Military Technical Institute, Ministry of Defence, Serbia, mkozic@open.telekom.rs
NOVICA RISTOVIĆ
Military Technical Institute, Ministry of Defence, Serbia

Abstract: This paper is the result of many years of research into the overpressure of the shockwave of the powder gases which occurs during firing from a mortar. The cause of this phenomenon is the sudden outflow of powder gases from the weapon barrel and their expansion into the undisturbed environment. The paper shows the influence of the propellant charge on the overpressure intensity of the shockwave produced by the powder gases near the mortar. The research comprised modelling and computation of the overpressure field around the weapon, in order to determine its intensity and distribution in space and time. With the aim of realistically describing the mentioned phenomena, the theoretical part is given first, and then the numerical modelling of the instantaneous flow of the powder gases from the mortar barrel is conducted. Experimental results were obtained by firing experiments with the 120 mm mortar. Computational and experimental results are given in the form of charts of the barrel pressure change and the overpressure of the powder gases at the characteristic measuring points around the mortar.
Keywords: gas dynamics, propellant charge, overpressure, powder gases, interior ballistics, mortar.

1. INTRODUCTION

The need to increase the maximum range, which is mostly achieved by better aerodynamic characteristics of the projectile and by design changes to the weapon, also requires the use of larger, i.e. more powerful, propellant charges. More powerful propellant charges consequently increase the operating pressure in the weapon barrel and the amount of powder gases produced, which leads to increased overpressure around the weapon. This is particularly pronounced for the mortar, where the firing operator (gunner) is in the vicinity of the weapon during the loading and firing procedure [1]. All of the above imposes the need to define the influence of the increase in propellant charge weight on the overpressure intensity of the shockwave near the mortar.

The theoretical basis is set up and mathematical modelling is done, as well as the computation of characteristic gas dynamic values [3].

For the verification of the applied numerical model [4], an experimental measuring method was used.

2. THEORETICAL BASIS

With the aim of resolving complex gas dynamics phenomena, Computational Fluid Dynamics (CFD) with the finite volume method (FVM) is used, in order to be able to use an unstructured mesh and a simple application of the Neumann boundary condition.

In the theoretical basis the following equations are used:

conservation of mass

\frac{\partial \rho}{\partial t} + (\rho v_i)_{,i} = 0    (1)

conservation of momentum

\rho \frac{\partial v_j}{\partial t} + \rho v_{j,i} v_i + p_{,j} - \tau_{ij,i} - \rho F_j = 0    (2)

conservation of energy

\rho \frac{\partial e}{\partial t} + \rho e_{,i} v_i + p v_{i,i} - \tau_{ij} v_{j,i} + q_{i,i} - \rho r = 0    (3)

where \rho is the fluid density, v_i are the velocity vector components, e is the internal energy per unit mass, F_j are the body (volume) force vector components, p is the pressure, \tau_{ij} is the viscous stress tensor, T is the temperature, q_i is the heat flux and r is the heat supply per unit mass.

The Navier-Stokes set of equations in the so-called conservation form [5], [6] is:

R = \frac{\partial U}{\partial t} + \frac{\partial F_i}{\partial x_i} + \frac{\partial G_i}{\partial x_i} - B    (4)

where U is the vector of conservative flow variables per unit volume, F is the vector of convective flux variables, G is the vector of diffusion flux variables and B is the vector of source terms (mass, momentum and energy).

In the process of numerical simulation, at the exit cross section on the muzzle, the equation of pressure change with time is used, defined as in Eq. (5):

p = p_p e^{-Kt}    (5)

where K is a coefficient for the barrel, p is the pressure, p_p is the initial pressure and t is time.

Temperature change is defined by equations (6) and (7):

\frac{1}{T}\frac{\partial T}{\partial t} = \frac{k-1}{k}\,\frac{1}{p}\frac{\partial p}{\partial t}    (6)

T = T_p \left( \frac{p}{p_p} \right)^{\frac{k-1}{k}}    (7)
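As a quick numerical illustration of the muzzle boundary condition in Eqs. (5)-(7), the following minimal sketch evaluates the pressure decay and the corresponding temperature; the values of K, p_p, T_p and k used here are arbitrary placeholders, not the parameters of the 120 mm mortar studied in this paper:

# Minimal sketch: muzzle pressure decay p(t) = p_p * exp(-K*t), Eq. (5), and the
# corresponding temperature from Eq. (7), T = T_p * (p/p_p)**((k-1)/k).
# K, p_p, T_p and k below are placeholder values, not the mortar's actual data.
from math import exp

K = 800.0        # 1/s, barrel coefficient (assumed)
p_p = 500e5      # Pa, initial muzzle pressure (assumed)
T_p = 1800.0     # K, initial gas temperature (assumed)
k = 1.25         # ratio of specific heats for propellant gases (assumed)

for t_ms in (0.0, 0.5, 1.0, 2.0, 5.0):
    t = t_ms / 1000.0
    p = p_p * exp(-K * t)
    T = T_p * (p / p_p) ** ((k - 1.0) / k)
    print(f"t = {t_ms:4.1f} ms   p = {p/1e5:7.1f} bar   T = {T:6.1f} K")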

3. GAS DYNAMICS CALCULATIONS

Numerical simulation provides the preconditions which, by computation, allow the characteristic values of the gas flow released in the firing process to be obtained. The relevant physical phenomena are simulated by the proposed mathematical model, and the state of the gas phase of the combustion products and of the surrounding air may be monitored in space and time. In this way the shock wave front, which moves through space causing the change of state in the undisturbed environment, is clearly observed.

Numerical computation was conducted on the basis of the proposed mathematical model and realized with the ANSYS FLUENT software, where the averaged Navier-Stokes equations are solved [9].

The dynamic mesh adaption used [11] is based on the pressure gradient, because of its very high value near the shock.

Fig. 1 gives the scheme with the characteristic measuring points, which correspond to the actual positions of the crew members.

Figure 1. Measuring points scheme

The results of the numerical calculations for the overpressure versus time, calculated for the characteristic measuring points MM1, MM2 and MM3, for 120 mm mortar firing, are given in Fig. 2.

Figure 2. Gas dynamic computation (2D)

Fig. 3 shows the visualized results of the numerical calculations for the gas flow characteristics, captured at a characteristic moment.


Figure 3. Results of the gas dynamics calculations: a) gas density; b) Mach number; c) gas pressure; d) concentration of propellant gases

4. INTERIOR BALLISTICS CALCULATIONS


Interior ballistics calculations are done for the case of firing the 120 mm lightweight high explosive shell (LTF) with propellant charges O+6 (ignition charge + 6 increment charges) and O+7 (ignition charge + 7 increment charges), and the new extended range 120 mm mortar shell with propellant charge O+10 (ignition charge + 10 increment charges).

Interior ballistics calculations are done for the three presented cases using software which calculates by the simplified Serebryakov method. Pressure versus time curves (p-t), i.e. pressure calculations for the three presented examples, are given in Fig. 4.

For the lightweight 120 mm LTF shell, the standard propellant charge (O+6) and the charge for the safety check at extreme barrel pressure (O+7) were used, with double-based propellant. For the extended range 120 mm XM95 mortar shell, a new propellant charge was used, which is also double-based but with different characteristics.

Figure 4. Comparative overview of pressure versus barrel length for firing the 120 mm mortar

Input data for interior ballistics calculations are given in


Table 1.


Table 1. Input data for interior ballistics calculations

Mortar 120 mm                M95        M95        M95
Shell 120 mm                 LTF        LTF        XM95
Propellant charge            O+6        O+7        O+10
Shell weight                 12,6 kg    12,6 kg    15,6 kg
Weight of ignition charge    0,032 kg   0,032 kg   0,040 kg
Weight of increment charge   0,456 kg   0,532 kg   0,810 kg

5. EXPERIMENTAL RESULTS
The experiment was realized by firing from the 120 mm mortar. The overpressure values at the characteristic measuring points around the weapon barrel were measured with PCB sensors, as shown in Fig. 5. The sensors used are PCB Piezotronics mod. 137A24, with a measuring range of up to 17,27 bar and a sensitivity of 20 mV/psi.

Figure 5. Results overview for the measured overpressure around the weapon for firing 120 mm shells LTF and XM95

The results of the experiment, measured during the firing of several groups of shells, are given in Table 2.
Table 2. Results overview for firing 120 mm mortar shells LTF and XM95

Charge / shell                                        Vo (m/s)  Pm (bar)  Δp MM1 (bar)  Δp MM2 (bar)  Δp MM3 (bar)  Δp MM4 (bar)
120 mm LTF, double-based propellant 456 g (O+6)       322,0     846       0,18          0,12          0,06          0,17
120 mm XM95, double-based propellant 567 g (O+7)      325,8     836       0,20          0,17          0,08          0,24
120 mm XM95, double-based propellant 810 g (O+10)     399,2     1401      0,30          0,26          0,10          0,32
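A small sketch that reads the measured values from Table 2 and flags any measuring point where the overpressure reaches the 0,30 bar level discussed later in the conclusion; the tabulated values and the threshold come from this paper, while the code itself is only an illustration:

# Minimal sketch: flagging measuring points where the measured overpressure
# from Table 2 reaches the 0.30 bar level discussed in the conclusion.
MEASURED_BAR = {  # (MM1, MM2, MM3, MM4) per firing configuration, from Table 2
    "LTF, 456 g (O+6)":   (0.18, 0.12, 0.06, 0.17),
    "XM95, 567 g (O+7)":  (0.20, 0.17, 0.08, 0.24),
    "XM95, 810 g (O+10)": (0.30, 0.26, 0.10, 0.32),
}
LIMIT_BAR = 0.30  # level considered disturbing for the mortar crew

for config, values in MEASURED_BAR.items():
    flagged = [f"MM{i+1}={v:.2f}" for i, v in enumerate(values) if v >= LIMIT_BAR]
    print(f"{config:22s} -> {', '.join(flagged) if flagged else 'below limit'}")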

Figure 6. Display of the mortar crew during firing (measuring point MM4 is the actual gunner's position)

Figure 7. Display of measuring points for overpressure measuring around 120 mm mortar

6. RESULT ANALYSES

Generally, the results obtained with the gas dynamic computation (2D) are fully satisfactory, as confirmed by the experimental results measured during firing and presented in Fig. 8. The available literature [14] states that the model of gas dynamic computation (2D) with a cylindrical shockwave gives a greater overpressure value in comparison with the spherical shockwave, which has been confirmed in this case.



Figure 8. Overpressure computation and experimental


results, 120 mm M95 mortar with LTF M62P1 and (O+7)
propellant charge

7. CONCLUSION
On the basis of the conducted analysis, the computation results, the experimental testing with the 120 mm mortar system and the applied design solutions, the following can be concluded:
- the gas dynamics computations, with 2D numerical simulation, showed that the proposed mathematical model with the applied software, using the adaptively generated mesh, determines the position and strength of the shockwave accurately and precisely enough, as shown in Fig. 8;
- the proposed mathematical model and the applied numerical simulation give a 3D presentation of the gas dynamics parameters of the flow (velocity, density, pressure and concentration), which may be observed at different time intervals and at different points in space around the weapon;
- with propellant charge 0+10, which provides a maximum velocity of 400 m/s, the overpressure at the gunner's position is Δp = 0,32 bar, which requires the use of protective equipment and imposes the need to reduce the overpressure, because an overpressure of 0,30 bar or more is considered one that may cause issues and disturbance of the mortar crew.
References
[1] Rončević,R., Jezdimirović,M., Lisov,M., Brkušanin,A.: (2012). Future Mortars Systems. Proceedings of 5th International Scientific Conference on Defensive Technologies OTEH 2012, pp. 260-266.


SECTION V

Integrated Sensor Systems and Robotic Systems

CHAIRMAN
Branko Livada, PhD
Ivan Pokrajac, PhD

ACOUSTIC SOURCE LOCALIZATION USING A DISCRETE PROBABILITY DENSITY METHOD FOR POSITION DETERMINATION
IVAN POKRAJAC
Department for Electronic System, Military Technical Institute, Belgrade, ivan.pokrajac@vs.rs
NADICA KOZI
Department for Electronic System, Military Technical Institute, Belgrade, nadica.kozic@gmail.com
PREDRAG OKILJEVI
Department for Electronic System, Military Technical Institute, Belgrade, predrag.okiljevic@vti.vs.rs
MIODRAG VRAAR
Department for Armament, Military Technical Institute, Belgrade, vracarmiodrag@mts.rs
BRUSIN RADIANA
Department for Electronic System, Military Technical Institute, Belgrade

Abstract: The localization of various acoustic sources in a battlefield (weapon rounds, mortars, rockets, mines,
improvised explosive devices, vehicle-borne improvised explosive devices and airborne vehicles) is a hot research
topic. In acoustic source localization systems multiple sensors (microphones or microphone arrays), placed at known
positions, are used to detect signals emitted from the source. In this paper, Discrete Probability Density (DPD) method,
as a method for position determination, has been used to estimate location of acoustic sources such as artillery
weapons. The DPD method provides position estimation using Time Difference of Arrival (TDOA) data. TDOA represents the relative time difference of arrival of the sound signal between pairs of sensors. The results of the proposed method have been validated using real data from the field experiment held under the technical panel NATO STO SET-189 Battlefield Acoustics, Multi-modal Sensing, and Networked Sensing for ISR Applications.
Keywords: acoustic emission, localization, DPD, TDOA.

1. INTRODUCTION
The localization of various acoustic sources in a
battlefield (weapon rounds, mortars, rockets, mines,
improvised explosive devices, vehicle-borne improvised
explosive devices and airborne vehicles) is a hot research
topic. In the open literature there are many papers presenting methods for acoustic source localization.
Some of them are based on cross-correlation techniques
and some on near-field beam-forming that can provide
good localization accuracy even in noisy and reverberant
environments [1-5]. All of these algorithms utilize the
sound recordings of the sensors to calculate the most
probable source location. However, data transfer from the
sensors to the processing unit requires relatively high
bandwidth, which can be a serious limitation for practical
implementation. Another possible bottleneck in such
systems is the huge amount of calculations to be
performed, limiting the response time and thus the
application area of the system [5].

In this paper we propose a localization method that utilizes a distributed two-stage approach. The first step of the proposed method is performed at the sensors, where the Time of Arrival (TOA) is determined (certain points of the acoustic signals are marked). These TOAs from all sensors are recorded and sent to the central processing unit. In the second step the central processing unit calculates the Time Difference of Arrival (TDOA) between sensors to obtain the position of the signal source using the DPD method.
The results shown in this work are obtained using real data from the field experiment held under the technical panel NATO STO SET-189 Battlefield Acoustics, Multi-modal Sensing, and Networked Sensing for ISR Applications.

In order to overcome the limitations of one-step techniques for position determination, two-step techniques can be used for acoustic source localization. In the two-step techniques, in the first step the differences between the times of arrival or the angles of arrival of the signal on pairs of sensors are measured and, based on these measurements, in the second step the location is estimated [5].

This paper consists of five parts. Introduction is given in


Section I. Concept of DPD method is given in Section II.
Method for acoustic source localization using DPD for
position determination is given in Section III. Some

results are presented in Section IV. Conclusions are given


in Section V.

2. DISCRETE PROBABILITY DENSITY METHOD

The DPD method is a novel technique developed at Defence R&D Canada for numerically combining data containing uncertainty, such as Electronic Warfare (EW) sensor measurements [6]. The DPD method is based on the assumption that the sensor measurement of a given parameter, in this case for example the angle of arrival (AOA) or the time of arrival (TOA), can be modeled by some probability density function (PDF) over the range of possible values. These PDFs may be represented by various distributions such as the Gaussian distribution or the von Mises PDF. The DPD method combines the probability density distributions of the measurements directly by sampling each PDF at common intervals and calculating the joint product over the space. The DPD method is applied to determine the spatial locations by projecting the measurement PDF from each sensor onto a common grid of sample points. This requires that some transform function exists in order to map the measured parameter into the 2-dimensional space (a 2-dimensional grid in X, Y of the Cartesian coordinate system). The angular transform function between the sensor location (xi, yi) and a grid point (x, y) is simply expressed as θi(x,y) = arctan((y - yi)/(x - xi)). The two-dimensional AOA DPD array can be represented by:

F_XY(n,m) = f(θi(X(n), Y(m))),  n = 1,...,N; m = 1,...,M    (1)

where F_XY is the AOA DPD array of size N x M and f(θi) is the Gaussian or von Mises PDF. For multiple DOA measurements, the joint DPD array is calculated over a common N x M grid [4]:

P'_XY(n,m) = Π_{s=1}^{S} F_XY(s,n,m)    (2)

where S is the total number of sensors. The joint DPD array is normalized by C = Σ_{n=1}^{N} Σ_{m=1}^{M} P'_XY(n,m) and can be expressed by:

P_XY(n,m) = (1/C) P'_XY(n,m)    (3)

where P_XY(n,m) represents the emitter location. The two-dimensional determination of the emitter location (xT, yT) is performed by first taking the Probability Mass Function (PMF) of P_XY(n,m):

PMF_X(n) = Σ_{m=1}^{M} P_XY(n,m),  n = 1,...,N
PMF_Y(m) = Σ_{n=1}^{N} P_XY(n,m),  m = 1,...,M    (4)

The estimated value is determined from the indices n and m weighted by the PMF. In order to estimate the position error (error ellipses) of the DPD method, the covariance matrix should be determined. The estimation of the location error is determined by the variances and covariance of the joint DPD array about the indices of the estimated emitter position [6]:

σ_X² = Σ_{n=1}^{N} PMF_X(n) (n - nT)² Δx
σ_Y² = Σ_{m=1}^{M} PMF_Y(m) (m - mT)² Δy    (5)

and

σ_XY = Σ_{n=1}^{N} Σ_{m=1}^{M} P_XY(n,m) (n - nT)(m - mT)    (6)

The DPD method can be used for acoustic source localization based on TDOA. In this case the TDOA PDF can be represented by a Gaussian function:

f(Δt_doa) = 1/(σ√(2π)) exp(-(Δt_doa - μ)² / (2σ²))    (7)

where μ is the mean value of Δt_doa and the estimated error is the standard deviation σ.

Picture 1. TDOA DPD contour for a source at (3200, -2658)
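To make the grid computation concrete, the sketch below combines Gaussian TDOA PDFs of the form (7) from several sensor pairs on a common X-Y grid, normalizes the joint product as in (2)-(3), and extracts the position estimate and marginal variances as in (4)-(5). It is a minimal Python sketch written for this text, not the authors' implementation; the sensor coordinates, grid extent, spacing and noise level are hypothetical.

```python
import numpy as np

# Hypothetical sensor positions (m) and speed of sound (m/s); illustrative only.
sensors = np.array([[0.0, 0.0], [1200.0, 300.0], [600.0, -900.0]])
c = 340.0
sigma_t = 0.005                      # assumed TDOA standard deviation (5 ms)

# Common DPD grid of candidate positions X(n), Y(m).
x = np.arange(-2000.0, 5000.0, 25.0)
y = np.arange(-5000.0, 2000.0, 25.0)
X, Y = np.meshgrid(x, y, indexing='ij')

def predicted_tdoa(pair):
    """Predicted TDOA over the whole grid for a sensor pair (i, j)."""
    i, j = pair
    di = np.hypot(X - sensors[i, 0], Y - sensors[i, 1])
    dj = np.hypot(X - sensors[j, 0], Y - sensors[j, 1])
    return (di - dj) / c

# Hypothetical measured TDOAs for a source at (3200, -2658), perturbed by noise.
true_pos = np.array([3200.0, -2658.0])
pairs = [(0, 1), (0, 2), (1, 2)]
d_true = np.hypot(true_pos[0] - sensors[:, 0], true_pos[1] - sensors[:, 1])
measured = {p: (d_true[p[0]] - d_true[p[1]]) / c + np.random.normal(0, sigma_t)
            for p in pairs}

# Sample each Gaussian TDOA PDF on the grid and take the joint product (eqs. (2) and (7)).
P = np.ones_like(X)
for p in pairs:
    diff = predicted_tdoa(p) - measured[p]
    P *= np.exp(-diff**2 / (2 * sigma_t**2)) / (sigma_t * np.sqrt(2 * np.pi))

P /= P.sum()                          # normalization by C, eq. (3)

# Marginal PMFs and the weighted position estimate, eq. (4).
pmf_x, pmf_y = P.sum(axis=1), P.sum(axis=0)
x_hat, y_hat = np.dot(pmf_x, x), np.dot(pmf_y, y)

# Marginal variances about the estimate (error-ellipse axes), eq. (5).
var_x = np.dot(pmf_x, (x - x_hat) ** 2)
var_y = np.dot(pmf_y, (y - y_hat) ** 2)
print(f"estimate: ({x_hat:.0f}, {y_hat:.0f}) m, sigma: ({np.sqrt(var_x):.0f}, {np.sqrt(var_y):.0f}) m")
```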


Distances from all possible target positions to all sensors are calculated. The distances are determined based on a 3D model of the terrain. After a preliminary estimation of the target position, it is possible to focus only on the part of the AOI close to that preliminary estimate and to perform the DPD method with a better spatial resolution in order to achieve better accuracy. The calculated distances are important because, based on these distances and other important parameters such as temperature and wind flow, the TOAs are calculated from all possible target positions to the different sensors. Based on these TOAs we calculate the TDOAs for the given sensor setup and the grid of possible target positions.
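A minimal sketch of this grid pre-computation, assuming flat terrain instead of the 3D terrain model and hypothetical sensor coordinates; it also illustrates the coarse-to-fine refinement of the grid around a preliminary estimate.

```python
import numpy as np

# An AOI of 40 km x 40 km at 100 m spacing gives 400 x 400 = 160,000 candidate positions.
step = 100.0
xs = np.arange(0.0, 40000.0, step)
ys = np.arange(0.0, 40000.0, step)
gx, gy = np.meshgrid(xs, ys, indexing='ij')
print(gx.size)                         # 160000

# Hypothetical sensor positions; distances from every candidate position to each sensor are
# precomputed once and stored (a flat-terrain approximation is used here instead of 3D terrain).
sensors = np.array([[5000.0, 5000.0], [12000.0, 7000.0], [9000.0, 15000.0]])
dist = np.stack([np.hypot(gx - sx, gy - sy) for sx, sy in sensors])   # shape (3, 400, 400)

def refine(center, half_width=2000.0, fine_step=25.0):
    """Coarse-to-fine refinement: rebuild a finer grid around a preliminary estimate."""
    fx = np.arange(center[0] - half_width, center[0] + half_width, fine_step)
    fy = np.arange(center[1] - half_width, center[1] + half_width, fine_step)
    return np.meshgrid(fx, fy, indexing='ij')
```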
The next step in the proposed approach is the estimation of the TOAs at each sensor in the network. The determination of the TOAs at each sensor can be performed before or after additional processing of the recorded acoustic signals, using an appropriate algorithm for TOA detection. The additional processing assumes wavelet decomposition of the recorded signals. One useful feature of employing wavelets in multiresolution analysis is that noise can be extracted from the baseband signal of interest, a process referred to as wavelet denoising. Wavelet denoising is optimal in the sense that noise components are removed from the signal components regardless of the frequency content of the signal, which is far more efficient than conventional filtering methods that retain baseband signal components and suppress high frequency noise [4].
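As an illustration of the wavelet denoising step described above, the following sketch (using the PyWavelets package) decomposes a recorded channel with the db7 wavelet, soft-thresholds the detail coefficients and picks a TOA as the first sample exceeding a noise-based threshold. The thresholding rule and the TOA picking criterion are simple assumptions made for this sketch, not the authors' detection algorithm.

```python
import numpy as np
import pywt

def denoise_db7(signal, level=4):
    """Wavelet denoising sketch: db7 decomposition, soft-threshold the details, reconstruct."""
    coeffs = pywt.wavedec(signal, 'db7', level=level)
    # Universal threshold estimated from the finest detail band (a common choice, assumed here).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, 'db7')[:len(signal)]

def detect_toa(signal, fs, k=6.0):
    """Simple TOA pick: first sample whose magnitude exceeds k times the noise RMS
    of the (assumed signal-free) leading part of the record."""
    clean = denoise_db7(signal)
    noise_rms = np.std(clean[: int(0.1 * len(clean))])
    idx = np.argmax(np.abs(clean) > k * noise_rms)
    return idx / fs

# Hypothetical usage: fs and the recorded channel come from the data acquisition system.
# toa = detect_toa(recorded_channel, fs=25000.0)
```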

Picture 2. DPD results for target at (3200, -2658) with spatial resolution 500 m
The results obtained using the DPD method based on TDOA measurements are presented in Picture 1 and Picture 2. It is assumed that there are three sensors and one target at position (3200, -2658). Picture 1 shows the PDFs for the three estimated TDOAs; the estimated target position is the point representing the intersection of these three estimated PDFs. Picture 2 shows the DPD grid for the same target, calculated using (3).

3. SENSOR NETWORK-BASED ACOUSTIC SOURCE LOCALIZATION

Picture 5 shows the recorded signals from three sensors and their corresponding wavelet decompositions. In this paper, the wavelet basis used is the db7 wavelet defined by Daubechies [7]. The TOAs can be determined using the raw data from all sensors after wavelet decomposition.

In order to provide geo-localization of an acoustic source in the battlefield, an approach has been proposed in this paper. The proposed approach is based on a two-step technique for geo-localization using a sensor network. Picture 3 shows the setup of three sensors for position determination of a mortar.

If it is possible to transfer all data from each sensor to the central unit, the TOAs can also be determined at the central unit, either manually or by an algorithm for TOA detection.

Picture 3. Setup of sensors for position determination

The block diagram of the proposed approach is shown in Picture 4. After choosing the sensor positions, based on the 3D model of the terrain for the area of interest (AOI), the first step is to calculate the distances from all possible target positions to each sensor and to save these data in memory. The number of possible target positions depends on the 2-dimensional grid in X, Y of the Cartesian coordinate system used for the DPD method. For example, if the dimension of the AOI is 40 km x 40 km (this AOI also includes the positions of the sensors), depending on the processing power and the required accuracy, we can choose different spatial resolutions. For a spatial resolution of 100 m, there are 160,000 possible target positions.

Picture 4. Flow-chart of the proposed approach for acoustic source geo-localization

Picture 7. TDOA sum of DPD matrices for all sensors, results obtained with spatial resolution 6.25 m
Picture 8 shows the RMS error of position estimation for 20 launch events of a mortar, for different spatial resolutions. It can be concluded that a better spatial resolution in the process of source localization using the DPD method provides better accuracy. Based on the shown results it can also be concluded that the accuracy is better for the x-coordinate than for the y-coordinate.

Picture 5. Recorded signals from three sensors and their corresponding wavelet decompositions (db7 wavelet)

4. SIMULATION RESULTS
The results of the proposed method have been validated using real recorded data gathered during the field experiment (realized in September 2015) held under the technical panel NATO STO SET-189 Battlefield Acoustics, Multi-modal Sensing, and Networked Sensing for ISR Applications.

Accuracy of the system could be improved using


appropriate meteo data during the process of position
determination such as wind speed, wind direction and
temperature.

The sensor setup during the field experiment is shown in Picture 6. The position of the mortar is also shown in Picture 6. The distances between the mortar position and the sensors were 1571 m, 725 m and 1445 m for sensor 1, sensor 2 and sensor 3, respectively. The TDOA measurement errors were set at 5 ms. For the estimation of the Root Mean Square (RMS) error we used meteo data gathered at the beginning of the field experiment.

Using a notebook with an Intel Core i3-5005U processor (2.0 GHz, 3 MB L3 cache) and 4 GB of DDR3L memory, we tested the speed of forming the DPD matrices for different spatial resolutions. Table 1 shows the estimated processing time for different spatial resolutions.
Table 1. Estimated processing time for different spatial resolutions
Spatial resolution      500 m    250 m    200 m    100 m
Time of processing      0.46 s   1.64 s   2.54 s   10.69 s

5. CONCLUSION
In this paper we describe an approach for acoustic source localization using a discrete probability density method for position determination. The proposed approach is based on the estimation of the time of arrival of the acoustic signal at each sensor and, in the second step, on applying the DPD method to determine the source location.

Picture 6. Sensors setup during the field experiment


For acoustic source localization we selected AOI
dimension of 15 km 15 km, and spatial resolutions were
31.25m, 15.625m, 12.5m and 6.25m. After estimation of
TDOAs sum of DPD matricies are calculated and source
position is estimated. At Picture 7. is presented TDOAs
sum of DPD matricies for all sensors and estimation of
position is calculated with spatial resolution 6.25m for
one lunch event of mortar.

This proposed method does not require high


communication bandwidth for data transfer between
sensors.
The proposed approach showed promising results in the
field experiment tests, but further research is necessary to
improve accuracy using appropriate meteorological
parameters.


Picture 8. RMS error of estimation of the source location

REFERENCES
[1] Chen,J.C., Hudson,R.E., Yao,K.: Maximum-likelihood source localization and unknown sensor location estimation for wideband signals in the near-field, IEEE Transactions on Signal Processing, 50.8 (2002): 1843-1854.
[2] Chen,J.C., Yao,K., Hudson,R.E.: Source localization and beamforming, IEEE Signal Processing Magazine, 19.2 (2002): 30-39.
[3] Chen,J.C., et al.: Coherent acoustic array processing and localization on wireless sensor networks, Proceedings of the IEEE, 91.8 (2003): 1154-1162.
[4] Desai,S., Hohil,M., Morcos,A.: Classifying launch/impact events of mortar and artillery rounds utilizing DWT-derived features and feed forward neural networks, Proc. SPIE 6247, 62470R (April 17, 2006), doi: 10.1117/12.667877.
[5] Simon,G., Sujbert,L.: Acoustic source localization in sensor networks with low communication bandwidth, 2006 International Workshop on Intelligent Solutions in Embedded Systems, IEEE, 2006.
[6] Elsaesser,D.: Sensor Data Fusion Using a Probability Density Grid, Proc. 10th International Conference on Information Fusion, 9-12 July 2007.
[7] Daubechies,I.: Ten Lectures on Wavelets, SIAM, Capital City Press, 1992.


STATISTICAL APPROACH IN DETECTION OF AN ACOUSTIC BLAST WAVE
MIODRAG VRAAR
Military Technical Institute, Belgrade
IVAN POKRAJAC
Military Technical Institute, Belgrade

Abstract: This paper presents a statistical approach to detection of the wave front of an acoustic blast wave in air. Detection of the wave front is significant in many areas where it is necessary to locate such sources. Detection of blast waves which originate from small explosions is performed at local distances, where acoustic propagation is confined to the atmospheric boundary layer. Therefore, the characteristic signatures of the small explosions are preserved and a statistical method can be used in the detection process.
Keywords: small explosion, detector, signature of small explosion.

1. INTRODUCTION
The topic of this paper is the detection of acoustic signals which originate from explosions of ammunition of different calibers, using a statistical approach. Detection of those small and/or distant explosions requires sophisticated measuring equipment and processing algorithms that enable precise detection of the moment when the shock or blast wave reaches the detector. In most cases, the synthesis of those detectors is based on a statistical hypothesis built on a background noise model in which the ambient noise is white and stationary. It is obvious that the ambient acoustic noise characteristics are difficult to determine a priori. Therefore, the null hypotheses are based on the expected physical characteristics of small explosions: short duration, impulsiveness, broadband content and coherency, all without making any assumptions on the background noise except that it exists.
An acoustic blast wave which originates from the firing of ammunition is similar to an N-wave. At relatively small distances, a few thousand meters, the characteristic of the blast wave signal is preserved and can be used as the input of a detector. A description of an N-wave detector based on edge detection of the steep-sided margins of the N-wave, as well as a parametric wavelet model consisting of cubic spline approximations to a Gaussian function, is given in the paper by Sadler et al. [1].
Acoustical direction finding and tracking systems play
and will play a prominent role on the future battlefield,
where situational awareness will be a key factor affecting
the survivability of light and medium weight forces. The
main advantages of acoustical sensors are low cost, small
size, passive operation, and operational capabilities in
non-line-of-sight (non-LOS) conditions.
At greater distances where waves duct in the tropospheric
jet stream, stratosphere, and thermosphere, multipathing,
dispersion, and other propagation effects serve to remove

this characteristic signature, and detection techniques are


typically based on simple power measurements or on the
coherence of waveforms across an array.
Detection methods exploit the different physical
characteristics of acoustic signals from explosions in
order to identify such signals amongst background noise
(which may be larger in amplitude and contain some of
the same physical characteristics).
Many detectors use a null hypothesis that is based on a background noise model in which it is understood that the noise is white and stationary (e.g., Kay, 1998) [2]. In reality, acoustic noise characteristics are difficult to determine a priori and, since we are interested in detecting a certain type of signal, we define null hypotheses that are based on the expected physical characteristics of small explosions: short duration, impulsive, broadband, coherent, all without making any assumptions on the background noise except that it exists. Each detector is based on the same null hypothesis of signal plus noise and a probability model is constructed for each test statistic. Fisher's Combined Probability Test (Fisher, 1958) [3] is then used to combine detections based on these separate physical properties into a single detection value.
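A small sketch of Fisher's combined probability test as it would be applied here: the per-property p-values below are hypothetical, and SciPy is used only for the chi-squared tail probability.

```python
import numpy as np
from scipy import stats

def fisher_combined_pvalue(p_values):
    """Fisher's combined probability test: -2*sum(ln p_i) ~ chi-squared with 2k dof under H0."""
    statistic = -2.0 * np.sum(np.log(p_values))
    return stats.chi2.sf(statistic, df=2 * len(p_values))

# Hypothetical per-property p-values (short duration, impulsiveness, broadband, coherence)
# for one analysis window; a small combined p-value triggers a detection.
print(fisher_combined_pvalue([0.04, 0.10, 0.02, 0.20]))
```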

2. THEORY
According to Sadler et al. [1], a blast or shock wave produces a pressure jump Δp at the start of the wave, which depends on the projectile diameter d and the projectile length l:

Δp ≈ 0.53 p0 d (M² - 1)^{1/8} / (x^{3/4} l^{1/4})    (1)

where p0 is the atmospheric pressure, M is the Mach number (quotient of the projectile velocity, v, and the velocity of sound in air, c) and x is the perpendicular distance from the projectile trajectory to the sensor (the nearest point of

approach, or miss distance). The velocity of sound in air is a function of a few atmospheric parameters. The most dominant parameters are temperature, atmospheric pressure, humidity, wind speed and turbulence. Significant experimental investigation of impulse propagation has been conducted at ISL1, France, and spatial-temporal coherence has been studied theoretically at ERDC2, U.S. (Collier S. et al.) [4].

received signal is

x(θ, φ, t) = p(θ, φ, t) + N(t)    (1)

Researchers commonly assume that the source signal and the noise are uncorrelated. Then, if the noise signals at the sensors are mutually uncorrelated and have an equal variance, σN², the total covariance is the sum

R = Rp + σN² I    (2)

The results of their studies show the dependency of the coherence amplitude of acoustic waves on:
- turbulence,
- refraction,
- impedance,
- sensor separation,
- frequency separation and
- range.

where the subscript p refers to the pressure field of the sound wave of interest and N to the noise. The assumption about the ambient noise is that it has a Gaussian distribution with zero mean. Now, by definition,

The performance of acoustical sensors strongly depends on environmental conditions. An environmental effect, the scattering of sound, affects the ability of arrays of acoustical sensors to determine the bearing angles of targets and their range. Scattering occurs when sound interacts with turbulence and other random atmospheric motions, creating random distortions in the propagating wave fronts. As the wave fronts propagate from the source (target) to the receiving array, they can accumulate substantial random variations in their orientation and intensity, which are perceived as fluctuations in the apparent bearing angles and strength of the source (D. K. Wilson, 1998) [5]. Recently, researchers have directly incorporated the effects of wave scattering into their predictions. A modern detection algorithm takes into account the effects of wave scattering. Accounting for the strong distortion of the wave front by atmospheric turbulence is crucial to obtaining an accurate performance characterization.

where <> indicates the ensemble average.

However, most military tactical scenarios involving acoustic ground sensors have propagation distances short enough that the signal is only relatively weakly scattered, meaning that it has a deterministic mean component.

3. SIGNAL MODEL
Consider an acoustic array with n sensors. The signal at each sensor results from:
- the wave that has propagated from the source of interest, with θ and φ as the azimuth and zenith angles of arrival (AOAs), and
- random ambient noise.
Let p(θ,φ,t) and N(t) be the time-varying complex envelopes of the two contributions, respectively. These column vectors have n elements, one element corresponding to each sensor. The source contribution is time dependent because of the random turbulent effects. The noise, which is also time dependent, may result from wind noise or other competing acoustic sources. The total
1 ISL - French-German Research Institute of Saint Louis.
2 ERDC - Engineer Research and Development Center, U.S. Army, Vicksburg.

(3)

Commonly, acoustic sensor arrays used in experiments have a sensor spacing similar to the height of the array above the ground. The performance of these arrays is affected by the large eddies of local thermal air turbulence. In order to analyze this impact, the isotropic, homogeneous von Kármán turbulence model is sometimes adopted for the air turbulence, since it is relatively simple and behaves fairly realistically in the energy-containing subrange. The more commonly used models are Gaussian, but the von Kármán model accurately describes the inertial subrange of the turbulence spectrum.
It is important to point out that the von Kármán form of the two-dimensional correlation function actually depends on whether the fluctuations in sound velocity are induced by a scalar field (temperature or humidity) or a vector field (wind velocity). The solution for the pressure field associated with a sound wave propagating through a moving medium can be obtained as a closed set of fluid dynamic equations. A few approximations are commonly applied to this equation set.
I. The parabolic approximation, valid for small-angle propagation, provides a reduction to a single equation.
II. The Markov approximation, which assumes that the turbulence field has vanishing correlation in the propagation direction, subsequently allows the statistical moments of the sound-pressure field to be obtained in a closed form.
These parabolic and Markov approximations are valid in the far field of the source, at observation points near the propagation axis.
To describe the statistical moments of a broadband pulse propagating over an impedance layer in a turbulent and refractive atmosphere with spatial fluctuations in the wind and temperature fields, we use the assumption that the acoustic signal without the presence of blast waves is distributed as white noise. Therefore, the acoustic amplitudes {xn} form a sequence of independent Gaussian variables with variance σ² and constant mean μ0. When blasts exist, jumps in the mean occur at unknown time instants.
Let {εn} be a white noise sequence with variance σ², and let

{xn} be the observation sequence such that:

(4)

In other words, for fixed n, the likelihood ratio test has to be performed between H0': μk = μ0 for k ≤ n and H1': μk = μ1 for k ≤ n.

where:

(5)

μ0 and μ1 are the means in the different time intervals. Detecting a jump of the mean is equivalent to accepting the hypothesis H1 of a change (r ≤ n < l) when testing it against the hypothesis H0 of no change in the mean (n < r and n > l). As the observations are independent of each other, the likelihood ratio test between these two hypotheses has the following form:

(6)

where:

0,1

(7)

S ,

(8)

(12)

The algorithm based on the likelihood ratio test between these two hypotheses was developed and used for detection of the time of arrival of the shock, or blast, wave from the acoustic source at the acoustic arrays. In addition, the algorithm includes some other properties of the process, such as its duration, its specific frequency content and its overall diversity from the acoustic ambient noise.
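The following sketch implements the jump-in-the-mean likelihood ratio detector in its standard Gaussian form (in the spirit of [6]); the exact equations of the paper are not all legible above, so the expressions used here, as well as the numerical values in the usage comment, are assumptions.

```python
import numpy as np

def mean_jump_detector(x, mu0, sigma2, nu, threshold):
    """Likelihood-ratio test for a jump of known magnitude nu in the mean of a white
    Gaussian sequence (H0: no change vs. H1: change at some unknown time r).
    Returns the decision, the detector value g_n = max_r S(r, n) and the ML estimate of r."""
    # Log-likelihood-ratio increments s_k = (nu / sigma2) * (x_k - mu0 - nu/2).
    s = (nu / sigma2) * (np.asarray(x, float) - mu0 - nu / 2.0)
    # S(r, n) = sum_{k=r}^{n} s_k for every candidate jump time r (suffix sums).
    tail_sums = np.cumsum(s[::-1])[::-1]
    r_hat = int(np.argmax(tail_sums))          # maximum-likelihood estimate of the jump time
    g_n = float(tail_sums[r_hat])              # detector statistic
    return g_n > threshold, g_n, r_hat

# Hypothetical usage on one channel: estimate mu0 and sigma2 from a signal-free segment,
# choose nu as the expected pressure jump of the blast wave in sensor units.
# alarm, g, r = mean_jump_detector(window, mu0=0.0, sigma2=1e-4, nu=0.02, threshold=20.0)
```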

4. EXPERIMENT
Five acoustic measuring stations of the HEMERA system were distributed over the area of a gunfire test range at mutual distances of several hundred meters, see Picture 1. The measuring platforms were synchronized, which made it possible to obtain simultaneous acoustic signals. Based on the acquired acoustic signals it is possible to find the time differences between the measuring stations which belong to the same detonation event.

Therefore, its logarithm is:

where a is the jump magnitude (here considered with its sign) [6]. The jump time r, being unknown, is replaced by its maximum likelihood estimate under H1, namely:

arg max

(9)

As the likelihood under H0 does not depend upon r,


also:
arg max

arg max

is
(10)

Picture 1. Arrangement of the measuring stations of the HEMERA system during the experiment.

The detector is thus:

max

(11)

Picture 2. Acoustic signal of the first HEMERA station and time detection of the arrival of the blast wave at the sensor (amplitude [V] vs. time [s]; detected arrival at t = 14.75027 s)


Picture 3. Acoustic signal of the second HEMERA station and time detection of the arrival of the blast wave at the sensor (amplitude [V] vs. time [s]; detected arrival at t = 15.05760 s)

Picture 4. Acoustic signal of the third HEMERA station and time detection of the arrival of the blast wave at the sensor (amplitude [V] vs. time [s]; detected arrival at t = 14.97996 s)

Picture 5. Acoustic signal of the fourth HEMERA station and time detection of the arrival of the blast wave at the sensor (amplitude [V] vs. time [s]; detected arrival at t = 15.34033 s)
Picture 6. Acoustic signal of the fifth HEMERA station and time detection of the arrival of the blast wave at the sensor (amplitude [V] vs. time [s]; detected arrival at t = 16.15685 s)


Pictures 2 to 6 present the acoustic signals and the times of arrival of the blast wave front at the detection system, all originating from the same detonation. It is obvious that detection was performed right at the beginning of the shock, or blast, process. According to the diagrams, there is no significant delay in the detection of the wave front. These results are significant because we can use them for localization purposes. It is obvious that it is necessary to better understand the transmission paths of the sound and the factors that contribute to the transmission of sound in air. Particularly significant impacts have temperature, atmospheric pressure, humidity, wind and the existence of local turbulence.
The existence of local turbulence is noticed when we analyze the shape of the blast wave front, the N-wave. For instance, the wave front has a regular shape in the case of the measuring station whose signal is shown in Picture 2. A relatively significant degradation of the shape is noticeable in Pictures 4 and 6. Consequently, future work should address the impact of turbulence on the detection capabilities of the system.

5. CONCLUSIONS
This paper has demonstrated the use of statistical methods to perform the detection/discrimination of acoustic signals which originate from explosions, against a complicated background that includes real noise and other types of signal. A detection algorithm based on the statistical approach was realized and the results of the implementation of this algorithm are presented in this paper. The detector is quantified in terms of a null hypothesis that is based on the assumption that the background noise is white, but unknown a priori. Such an approach enabled detection of the explosion or, more precisely, of the moment when the shock or blast wave front reaches the detection microphones.
The statistical approach explained in this paper assumes the existence of a significant SNR in the acquired signal. Detection of the rising edge of the N-wave signal is then more successful.

References
[1] Sadler,B.M., Pham,T., Sadler,L.C.: (1998). Optimal
and wavelet-based shock wave detection and
estimation, J. Acoust. Soc. Am. 104, 955963.
[2] Kay,S.M.: Fundamentals of Statistical Signal
Processing: Detection Theory, Prentice-Hall, Upper
Saddle River, NJ, 560 pp, 1998.
[3] Fisher,R.A. :(1958). Statistical Methods for Research
Workers, 13th ed. (Oliver and Boyd, Edinburgh,
U.K.).
[4] Collier,S.L., Wilson,D.K.: Performance Bounds on Atmospheric Acoustic Sensor Arrays Operating in a Turbulent Medium: I. Plane Wave Analysis, Army Research Laboratory, Adelphi, MD 20783-1197, ARL-TR-2426, February 2002.
[5] Wilson,D.K.: Performance bounds for acoustic direction-of-arrival arrays operating in atmospheric turbulence, J. Acoust. Soc. Am. 103, 1306-1319 (1998).
[6] Basseville,M., Benveniste,A.: Detection of Abrupt
Changes in Signals and Dynamical Systems,
Springer-Verlag Berlin Heidelberg New York
Tokyo, 1986.


ADAPTIVE TIME VARYING AUTOPILOT DESIGN


NATAA VLAHOVI
PhD studies: School of Electrical Engineering, University of Belgrade,
Work place: Military Technical Institute, Belgrade, natasha.kljajic@yahoo.com
STEVICA GRAOVAC
School of Electrical Engineering, University of Belgrade, Belgrade, graovac@etf.bg.ac.rs
MILO PAVI
Military Technical Institute, Belgrade, cnn@beotel.rs
MILAN IGNJATOVI
Military Technical Institute, Belgrade, milan.ignjatovic@hotmail.rs
Abstract: Missile autopilots represent devices for precise following of a target that missiles are sent to intercept. Commands
that need to be executed are forwarded to the autopilot in order to achieve demanded performance. Usually, in
representation of an autopilot and its performance, a model with constant parameters is used. Constant autopilot
parameters and missile coefficients are calculated from time varying values. Model with constant parameters is shown in
this paper, as well as the results obtained from this model. These values are then used as the initial ones for calculations in
time varying pitch autopilot model design based on gain-scheduling. It is shown how the coupling with other axes, such as yaw, is cancelled by using time varying parameters. All the models are implemented in Matlab and the Simulink toolbox.
Keywords: Autopilot design, Time varying coefficients, Matlab, Simulink, cross - coupling cancellation, gain-scheduling
design.

1. INTRODUCTION
In this paper, a tactical missile autopilot design is shown. The
purpose of a tactical missile is to intercept targets, and since
tactical missile autopilots are part of the larger system, they
must contribute to that goal. The process by which a missile
executes an intercept is by first sensing the target, and then the
target information is used to generate guidance commands. If
the guidance commands are followed precisely, the missile will
intercept the target. The problem is to follow them with
precision, and this is where the autopilot comes in. The missile
autopilot receives guidance commands and produces control
deflections to move the missile in a manner consistent with
completing the intercept [1].
The missile and its behavior are described by equations of motion that are nonlinear and time varying. The procedure used in this paper is linearization of these equations for various flight conditions; the autopilot design is then performed with linearized, but time varying, parameters.
As the general design of an autopilot for many missiles is performed assuming the independence of the roll, pitch and yaw channels, this simplified model can cause problems in practice. One of the reasons is the coupling between the pitch, yaw and roll axes, which is nowadays a very popular topic [2]. The autopilot design
approach is described through phases, from simple model
with constant coefficients and parameters through time
varying model with one optimal value for the adjustable
parameters. At the end, an adaptive autopilot design is
described, with four time ranges for the adjustable


parameters. All design and simulations are realized in


Matlab and Simulink software.

2. AUTOPILOT DESIGN
An autopilot represents a system for automatic control and stabilization of a moving object. One possible realization of a missile lateral autopilot is normal acceleration control with an accelerometer and a gyroscope, which is shown in this paper [3]. An autopilot is a closed loop system and it is an inner loop inside the main guidance loop. Not all missile systems require an autopilot, but for a more precise system it is a necessity. A missile that carries accelerometers and/or gyroscopes in order to provide additional feedback into the missile servos and modify the missile motion has an autopilot in its control system. Autopilots control the motion in the pitch and yaw planes (lateral autopilots) or the motion about the longitudinal axis (roll autopilots). The autopilot's main tasks are: to increase the system natural frequency, to increase the system natural damping ratio, to reduce the cross-coupling between axes and to assist in gathering [4].

2.1. Constant parameter autopilot design


The autopilot model is based on linearized equations of motion, and from this model we obtain the missile transfer function aerodynamic parameters for the design procedure. In this paper, a procedure for the design of a pitch autopilot is presented. The design procedure is similar for the other axes. The basic parameters of the missile motion in one plane, in this case the vertical plane, are shown in Picture 1.


As can be seen in Picture 2, the flight parameters over time are almost constant from the 7th to the 35th second. For the constant parameter autopilot design, the mean value of each parameter is calculated, as well as the transfer function with the mean values of the parameters.
The aerodynamic transfer functions that are important for the autopilot design are shown in Picture 3, where the model used in the design is presented. The presented model is a 3rd order model, which takes into account that some dynamics (such as the accelerometer, gyro, actuator and structural filter) are relatively fast and are thus not modelled. As can be seen from the model, there are three important aerodynamic transfer functions. The first one is the pitch rate transfer function in the inner loop, and the second one is the pitch normal acceleration transfer function. The pitch acceleration transfer function in the outer loop, measured by the accelerometer, an', is taken to be the same as the pitch acceleration transfer function an for simplicity of calculations.

Picture 1. Basic parameters of a missile motion

Using the small perturbation theory, the transfer functions are developed. The pitch rate transfer function is given by (1), and its parameters are calculated according to equations (2)-(5).
The next aerodynamic transfer function that is important in the autopilot design is the acceleration transfer function an, given by (6); its parameters are calculated according to equations (7)-(14).

Picture 2. Missile flight parameter change over time

The first step is plotting the parameters used in the transfer functions over time, in order to see how these time variable parameters change. The aerodynamic parameters are calculated for a tactical anti-tank missile with almost constant velocity and aerodynamic parameters. Picture 2 shows all the parameters needed for the calculation of the autopilot parameters, as well as the change of the missile velocity V over time for comparison.

For the calculation of this transfer function it is also necessary to examine the change of the parameters over time. After plotting the parameters over time, the results show that these parameters are also nearly constant in the time range from the 7th to the 35th second. Thus, the mean value of every parameter is calculated in order to calculate the transfer function.


Since the autopilot represents a system for automatic control and motion stabilization of a guided object, there are a few design solutions. One possible realization is a normal acceleration control system with an accelerometer and a rate gyroscope. In order to simplify the process for the initial calculations, a 3rd order model is used (Picture 3).

Picture 3. Autopilot design - 3rd order model


All other necessary parameters for the autopilot design are given by equations (15)-(21).
The parameters Ki, Kr and Kac are the adjustable parameters that need to be tuned in order to obtain the desired system response, defined by the desired performance parameter values [5].
After the parameter calculation, the next step is the Simulink model design, based on the 3rd order model with the mean values of the transfer function parameters. This step is important for the autopilot designer in order to observe the basic system behavior, so that a more complex model can be designed more easily, and to gain some initial knowledge about the system. The system step response is shown in Picture 4.

Picture 4. Constant parameters autopilot step response (normal acceleration amplitude [V] vs. time [s])

In this case, as well as for the other plots of the autopilot response, the observed response is the normal acceleration response, measured with a simulated accelerometer.

2.2. Time varying autopilot design

Since the constant parameter model shows some basic behavior of the system but is not accurate enough, the next design step is the time varying autopilot design, with transfer function parameters that change dynamically over time. In this section, a procedure for the time varying autopilot design in Matlab and Simulink is described. The desired transfer functions have to be represented in canonic form in order to accomplish the time varying design in Simulink. A detailed procedure for the canonic form calculation is described in [6]. Picture 5 shows a transfer function on the left side and its canonic representation on the right side.
With the transfer functions transformed into the canonic representation, it is now possible to plot the response with dynamic parameters that vary in time, which is certainly a more accurate model of the system.
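A minimal sketch of the controllable canonical (companion) state-space form on which the time varying implementation relies; the transfer function coefficients in the example are hypothetical, and this is not the authors' Simulink realization.

```python
import numpy as np

def controllable_canonical(num, den):
    """Controllable canonical (companion) state-space form of a strictly proper
    transfer function num(s)/den(s); den has order n, num has order < n."""
    den = np.asarray(den, float)
    num = np.asarray(num, float) / den[0]    # scale both polynomials so den is monic
    den = den / den[0]
    n = len(den) - 1
    b = np.zeros(n)
    b[n - len(num):] = num                   # pad the numerator to length n
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)               # integrator chain
    A[-1, :] = -den[:0:-1]                   # last row: -a0, -a1, ..., -a_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = b[::-1].reshape(1, n)                # output taps the chain states
    return A, B, C

# Hypothetical 3rd-order transfer function; in a time varying simulation the coefficients
# (and hence A and C) are recomputed at every step from the current flight parameters.
A, B, C = controllable_canonical([12.0, 30.0], [1.0, 4.0, 20.0, 50.0])
```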


Picture 5. Canonic representation of transfer function


In this step of the autopilot design, the adjustable parameters have only one value during the whole time interval, and it is set to the value that ensures the best possible system response. For the pulse input signal, the system response is shown in Picture 6.

Picture 6. System response - time varying autopilot design with a pulse input (normal acceleration amplitude [V] vs. time [s])

Picture 7. System response - gain scheduling autopilot (normal acceleration amplitude [V] vs. time [s])


From the response plot of the time varying autopilot design with one value for the adjustable parameters it can be seen that, for the time interval with nearly constant flight parameters, the response is slightly above the ideal stationary state value. It is also visible that in the first and last parts of the flight, where the flight parameters change significantly, the response is very unstable, with visible oscillations; this was expected, but it is not desirable system behavior. In the next step, the idea is to set more values for the adjustable parameters in order to improve the performance of the system.

2.3. Adaptive gain scheduling autopilot design

Table 1 shows the desired performance values for the different time ranges. For these desired values, the adjustable parameter values are calculated so that they can be used in the different operating ranges. Since the unstable periods with oscillations in the response occur in the periods with the most significant changes in the dynamic missile parameters, which are at the beginning and at the end of the flight, while the part with almost constant parameters lasts from around the 7th to the 35th second, the values of the adaptive parameters are defined accordingly. That is why there is only one parameter value for the whole constant part of the flight path, and more values for the dynamic part at the beginning of the flight path, where the parameters should follow the change in the dynamic missile coefficients.
In this chapter, an adaptive autopilot design is explained. For each missile flight phase, different values of the adjustable parameters are set in a way that ensures the best possible response. Since the same input signal is used, the results are compared with the results obtained with the autopilot design from the previous step, which has one value for each parameter (Picture 8).
The autopilot's gain scheduled parameters are derived as functions of the designed system parameters, with the objective of the control being to achieve tracking of a commanded acceleration with stability and precision. So, the goal is to obtain the best possible commanded acceleration tracking.

Table 1. Adjustable parameters values
Period | Time range [s] | V value | Parameters values
I      | [0, 3]         |         | e = 0.0105, e = 85, e = 0.7
II     | (3, 6]         |         | e = 0.0069, e = 120, e = 0.7
III    | (6, 7]         |         | e = 0.0105, e = 85, e = 0.7
IV     | (7, 35]        |         | e = 0.0073, e = 118, e = 0.7
V      | (35, 41]       |         | e = 0.008, e = 115, e = 0.7
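A minimal sketch of the scheduling logic behind Table 1: the parameter set is selected from a lookup table according to the current flight time. The parameter names k_e, omega_e and zeta_e and their mapping to the three values in each row are assumptions, since the symbols are not legible in the table; only the time ranges and numbers are taken from it.

```python
# Gain-scheduling lookup sketch for the flight phases of Table 1.
# The parameter names and their mapping to the three listed values are assumptions;
# only the time ranges and the numerical values are taken from the table.
SCHEDULE = [
    # (t_start, t_end, k_e,    omega_e, zeta_e)
    (0.0,  3.0,  0.0105, 85.0,  0.7),
    (3.0,  6.0,  0.0069, 120.0, 0.7),
    (6.0,  7.0,  0.0105, 85.0,  0.7),
    (7.0,  35.0, 0.0073, 118.0, 0.7),
    (35.0, 41.0, 0.008,  115.0, 0.7),
]

def scheduled_gains(t):
    """Return the adjustable parameter set (k_e, omega_e, zeta_e) for flight time t [s]."""
    for t0, t1, k_e, omega_e, zeta_e in SCHEDULE:
        if t0 <= t <= t1:
            return k_e, omega_e, zeta_e
    return SCHEDULE[-1][2:]        # hold the last set after 41 s

# Example: parameters used at t = 10 s (constant-parameter part of the trajectory).
print(scheduled_gains(10.0))       # -> (0.0073, 118.0, 0.7)
```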

As we can see from the response plot, the result for the stationary state is now better. In the first and last parts of the time interval, where the flight parameters change considerably, the response is more stable than in the case of just one value for the adjustable gains.


results can be obtained with the adaptive autopilot design, since there is a value for this segment that ensures the desired system behavior, and another value that ensures the desired system behavior for the other segments. So, with an adaptive design of this type, the achievement is that the response is more stable in the first time interval. Furthermore, the stationary value is more precise in the constant flight parameter interval and, in the parts with very large parameter changes, the oscillations are not as intense as was the case with the previous non-adaptive design.

3. SIMULATION AND RESULTS

In the last few sections, an approach to gain scheduling autopilot design was described. Now the results obtained from the non-adaptive and the adaptive design will be compared in the case of a present disturbance.
Since missile autopilots also perform the task of reducing the cross-coupling between axes, the influence of a disturbance of this type is examined. The autopilot design is described for a missile with slow rotation, and in the observed channel, the pitch channel, there exists a parasitic signal, a command from the other channel, the yaw channel. Since the realized design is for the pitch autopilot, the yaw autopilot signal (the yaw autopilot is the same as the pitch autopilot) will now represent the disturbance coupled to the pitch signal. Obviously, this yaw signal that represents the disturbance is 0.1 or 0.05 of the real yaw signal value, since it is only a parasitic coupling and is not of the intensity of a real signal.
The results obtained from the time varying autopilot with one value for the adjustable parameters are shown in Picture 8.

4. CONCLUSION
In this paper, an approach to autopilot design is described. It begins with a simple model design, the constant parameter design, with transfer function coefficients that are constant in time. This is the first step in the design, and it is important for the designer to become familiar with the system. The next described step is a more complex model, the time varying autopilot design with dynamic coefficients that change over time; therefore all transfer functions need to be represented accordingly. One value for the adjustable parameters is calculated, and that value should be optimal for the entire time interval. The last step is the adaptive autopilot design with different values of the adjustable parameters for each flight phase. It is shown that the adaptive autopilot design gives better performance in all intervals and stabilizes the moving object in the phases with drastic parameter changes. In the case where a disturbance of the cross-coupling type is present, different values are used for the adjustable coefficients, and the results would be even better with more distinct values, which will certainly be a topic to consider in future work. Furthermore, it is not always possible to find an optimal solution when using linearized model parameters, especially when dealing with highly maneuverable missiles. On the other hand, there may be modelling errors and parameter uncertainties. In these cases, nonlinear control design approaches can be really helpful, as described in [7]. Therefore, this can be a topic for future work and for improvement of the robustness of the autopilot design.

Picture 8. System response - time varying autopilot design with a cross-coupling type of disturbance (normal acceleration amplitude [V] vs. time [s])

Afterward, the results obtained from the adaptive autopilot design with three different parameter sets for the different flight phases are shown (Picture 9).

References
[1] Baillieul,J., Samad,T.: Encyclopedia of Systems and
Control, DOI 10.1007/978-1-4471-5058-9, SpringerVerlag London 2015.
[2] Mohammadi,M.R., Jegarkandi,M.F., Moarrefianpour,A.:
Robust roll autopilot design to reduce couplings of a
tactical missile, Aerospace Science and Technology
51(2016) 142-150, 2016.
[3] Graovac,S.: Automatic Guidance of Objects in Space,
Akademska misao, Belgrade, 2006
[4] Garnell,P., East,D.J.: Guided Weapon Control Systems,
Royal Military College of Science, Swindon, England,
1977
[5] uk,D., urin,M., Mandi,S.: Autopilot Design,
Theoretical Manual, MTI, Belgrade, 2004
[6] urovi,., Kovaevi,B. :Automatic Control Systems,
Akademska misao, Belgrade, 2006.
[7] Kim,S.H., Kim,Y.S., Song,C.: A robust adaptive nonlinear control approach to missile autopilot design, Control Engineering Practice 12 (2004) 149-154, doi: 10.1016/S0967-0661(03)00016-9, 2004.

Picture 9. System response - adaptive autopilot design (normal acceleration amplitude [V] vs. time [s])


As can be seen from Picture 8 and Picture 9, in the first part of the flight, where the parameters vary significantly, better
344

MATHEMATICAL MODEL FOR PARAMETER ANALYSIS OF PASSIVELY Q-SWITCHED Nd:YAG LASERS
MIRJANA NIKOLI
Military Technical Institute, Belgrade, mystastya@sezampro.rs
ELJKO VUKOBRAT
Military Technical Institute, Belgrade

Abstract: For the passively Q-switched solid state laser, the initial population density in the gain medium does not depend
on the pump rate as in case of active Q-switching. The initial population density in the gain medium for passive Q-switching
is determined by the initial transmission of the saturable absorber and the reflectivity of the output mirror. The numerical
simulation based on the ESA (excited state absorption) effect was performed. From the wide variety of general theories of passively Q-switched lasers, we chose a second-threshold approach, a numerical fitting procedure and an optimization method, in order to determine the output pulse energy as a function of the initial transmission of the saturable absorber and the reflectivity of the output coupler. The example of modeling a solid state flashlamp pumped Nd:YAG laser with Cr4+:YAG as
a saturable absorber is explained and examined to illustrate the use of the present model. The present mathematical model
provides a straightforward procedure for the design of passively Q-switched solid state lasers.
Keywords: solid state lasers, Nd:YAG lasers, passive Q-switching, saturable absorbers, excited state absorption effect.

1. INTRODUCTION
In the laser technique, a laser operation mode used for the generation of high pulse power is known as Q-switching [1]. This effect is so named because the optical Q factor of the resonant cavity is altered when this technique is employed. The quality factor, or Q factor, of a resonator is defined as the ratio of the energy stored in the resonator cavity to the energy lost per cycle. This means that the higher the quality factor, the lower the losses.
In the laser resonator, the energy is accumulated by optical pumping of the active amplifying laser medium. While the energy stored in the medium and the gain in the active medium are high, the cavity losses are also kept high, the lasing action is prohibited, and the population inversion reaches a level far above the threshold for normal lasing action. When a high cavity Q is restored by the Q-switching process, the stored energy is discharged in an extremely short time as a powerful pulse.
Picture 1 presents the time sequence of the Q-switched pulse generation in a solid state laser pumped with a power flashlamp. The flashlamp output, resonator losses, inversion density and photon flux are shown in the picture as functions of time. The picture shows that the lasing action is disabled in the cavity by a low Q factor of the cavity. Toward the end of the flashlamp pulse, when the inversion has reached its peak value, the Q factor of the resonator is switched to some high value (the losses are small). At this point a photon flux starts to build up in the cavity, and a Q-switched pulse is emitted after an appreciable delay time.

Picture 1. Time sequence of the Q-switched pulse generation.

In order to form a powerful pulse by the Q-switching technique, it is necessary to fulfill two conditions [1]:
- the speed of optical pumping of the active laser gain medium has to be faster than the depopulation speed by spontaneous emission from the upper laser level,

A very important approximation which simplifies the mathematical model of the Q-switching process in the Q-switching theory is that the Q-switched pulse duration is very short, so that we can neglect the influence of the optical pumping and spontaneous emission.

- the Q-switching process has to be faster than the growth of the stimulated emission.
The modules used in Q-switching techniques can be:
- mechanical,
- electro-optical,
- acousto-optical,
- passive.


Q-switch, the ground state absorption cross section has to be


large and, simultaneously, the upper state lifetime (level 2)
has to be long enough to enable considerable depletion of
the ground state by the laser radiation. When the absorber is
inserted into the laser cavity, it will look opaque to the laser
radiation until the photon flux is large enough to depopulate
the ground level. If the upper state is sufficiently populated
the absorber becomes transparent to the laser radiation.

Q-switched solid state lasers are widely used in military applications such as laser rangefinders and lidars, for pollution detection, and also for medical purposes [2].

2. PASSIVE Q-SWITCHING
The passive Q-switch module consists of an optical element,
such as a cell filled with organic dye or a doped crystal. The
material becomes more transparent as the optical flux
increases, and at high optical levels the material saturates
or bleaches, resulting in a high transmission [1]. The
bleaching process in a saturable absorber is based on
saturation of a spectral transition. If such a material with
high absorption at the laser wavelength is placed inside the
laser resonator, it will initially prevent laser oscillation. As
the gain increases during a pump pulse and exceeds the
round-trip losses, the intracavity flux increases dramatically
causing the passive Q-switch to saturate. Under this
condition the losses are low, and a Q-switch pulse builds up.

Picture 2. The energy levels of a saturable absorber.

The most important parameters of saturable absorbers are: the spectral band in which absorption occurs, the dynamic response (recovery speed), and the saturation intensity and fluence (the intensity or energy which drives it to saturation) [2].

Since the passive Q-switch is switched by the laser radiation itself, it requires no high voltage or fast electro-optic driver. Compared with the active methods, the passive Q-switch offers the advantage of an exceptionally simple design, which leads to very small, robust and low-cost systems. The major drawbacks of a passive Q-switch are the lack of a precise external trigger capability and a lower output compared to electro-optic or acousto-optic Q-switched lasers. The latter is due to the residual absorption of the saturated passive Q-switch, which represents a rather high insertion loss.

For the modeling of a laser with a passive Q-switch, we can introduce the following approximations [3]:
- the optical pumping of the active gain medium is uniform,
- the optical intensity in the resonator is axially uniform,
- the saturable absorber recovers completely during one pulse duration, or the pulse rate is low.
In the actively Q-switched laser, ni is the initial population inversion density in the gain medium and it is proportional to the pump rate. For the passively Q-switched laser, ni does not depend on the pump rate, but is determined by the initial transmission of the saturable absorber (T0) and the reflectivity of the resonator output mirror (R). Therefore, we shall model the output pulse energy with the parameters T0 and R, following the method described in [3].

Originally, saturable absorbers were based on different


organic dyes, either dissolved in an organic solution or
impregnated in thin films of cellulose acetate. The poor
durability of dye-cell Q-switches, caused by the degradation
of the light sensitive organic dye, and the low thermal limits
of plastic materials severely restricted the applications of
passive Q-switches in the past. The emergence of crystals doped with absorbing ions has greatly improved the durability and reliability of passive Q-switches.

We have defined two important parameters:
- the upper limit of T0, which governs regular Q-switching behavior for a given R, gain medium and saturable absorber. With this parameter, the output pulse energy was explicitly fitted as an analytical function of T0 and R;
- the lower limit of R, which ensures normal Q-switching behavior for a given T0, gain medium and saturable absorber. With this parameter, the optimum output reflectivity for maximizing the output pulse energy was successfully fitted as an analytical function of T0.

Today, the most common material employed as a passive Q-switch is Cr4+:YAG. The Cr4+ ions provide a high absorption cross section at the laser wavelength, and the YAG crystal provides the desirable chemical, thermal, and mechanical properties required for long life.
A material exhibiting saturable absorption can be represented by a simple energy level scheme such as that shown in Picture 2.

In this paper, in order to illustrate the use of the presented model, the example of a solid-state laser with a passive saturable absorber is examined. The active gain medium is an Nd:YAG rod (Picture 3) and the passive saturable absorber is a Cr4+:YAG crystal (Picture 4).

Picture 2 shows the energy levels of a saturable absorber with excited state absorption, where σgs and σes are the ground-state and excited-state absorption cross sections, respectively, and τ is the excited-state lifetime. Absorption at the wavelength of interest occurs at the 1-3 transition. We assume that the 3-2 transition is fast. For a material to be suitable as a passive

Picture 3. The typical Nd:YAG rods

dn/dt = -γσcφn                (2)

dngs/dt = -(A/As)·σgs·c·φ·ngs                (3)

ngs + nes = ns0                (4)

where
φ - photon density in the resonator with respect to the effective cross section of the laser beam area in the gain medium,
n - population inversion density of the gain medium,
ls - length of the saturable absorber,
l - length of the active gain medium,
σ - the cross section for stimulated emission in the gain medium,
A/As - the ratio of the effective areas in the gain medium and in the saturable absorber, respectively,
ngs - population density of the absorber ground state,
nes - population density of the absorber excited state,
ns0 - total population density in the saturable absorber,
σgs and σes - the cross sections for absorption in the ground and in the excited state of the saturable absorber, respectively,
R - the output mirror reflectivity,
L - the nonsaturable dissipative optical losses in the resonator,
γ - inversion reduction factor (1 for a four-level laser system and 2 for a three-level laser system),
tr = 2l/c - round-trip time of flight in the resonator with optical length l, where c is the light speed in vacuum.

Picture 4. The typical Cr4+:YAG saturable absorbers

3. MATHEMATICAL MODEL OF THE PASSIVE Q-SWITCHING
In the theory of passive Q-switching mathematical modeling there are many approaches [4-6]. In order to set up the modeling algorithm, we need to know the physical processes in the saturable absorber material.
Picture 5 shows a scheme of the laser resonator with its basic elements and the passive Q-switch, where 1 is the total (fully reflective) mirror, 2 the passive saturable absorber of length ls, 3 the cylindrical rod of length l and cross-section diameter dl serving as the laser active gain medium, 4 the output mirror with defined reflectivity, and l the resonator length.

Following the detailed explanations given in [3] and using the appropriate approximations and simplifications given in [1, 3-5, 7, 8] for the mathematical description of the ESA effect, we obtain the relations for the upper limit of T0 and the lower limit of R for producing a giant pulse, respectively:

(T0)uplim = exp{ -[ln(1/R) + L] / [2·(α(1-β) - 1)] }                (5)

(R)lowlim = exp{ -(α(1-β) - 1)·ln(1/T0²) + L }                (6)

Picture 5. The basic elements of the laser resonator

In this paper we shall briefly describe the mathematical model of the ESA (excited state absorption) effect and follow the approach developed in [3] to derive the output pulse energy for the Gaussian beam distribution with the ESA effect.

where α and β are the following constants:

α = (σgs/(γσ))·(A/As),    β = σes/σgs                (7)

Further, the parameters (T0)uplim and (R)lowlim can be used to express the output pulse energy and the optimum output reflectivity, respectively, as analytical functions. The parameters of most solid-state lasers with passive Q-switching satisfy α > 2 and β < 0.7, so we can use the numerical-fitting method of the mathematical model, based on the huge base of experimental data collected in [3].

3.1. Mathematical model of ESA effect

The coupled rate equations have been used to model a passively Q-switched laser in many investigations [5-8]. Chen and coworkers [3] extended the results previously obtained in [4] by including the influence of intracavity focusing and the ESA effect. The coupled equations for three- or four-level gain media are modified as follows [3]:

dφ/dt = (φ/tr)·[2σnl - 2σgs·ngs·ls - 2σes·nes·ls - (ln(1/R) + L)]                (1)

The energy extracted from the gain medium of a passively Q-switched laser comprises three parts. Some of the energy is lost in bleaching the saturable absorber, some is lost to intracavity losses or the ESA effect, and the remaining part leaves the cavity as the output energy. With the transformation suggested in [3] we obtain for the output pulse energy:

4. NUMERICAL EXAMPLE
As a numerical example we perform the modeling of a solid-state Nd:YAG laser with a passive Cr4+:YAG saturable absorber in the plane-parallel resonator configuration.
For performing the calculations, we used the following constants [1, 3, 6, 8]:
c = 3·10^8 m/s - the light speed in vacuum,
h = 6,62·10^-34 Js - Planck constant,
γ = 1 - inversion reduction factor,
σ = 2,8·10^-19 cm² - the cross section for the stimulated emission in the gain medium,
ns0 = 2,7·10^18 cm^-3 - total population density in the saturable absorber,
σgs = 8,7·10^-19 cm² and σes = 2,2·10^-19 cm² - the cross sections for absorption in the ground and in the excited state of the saturable absorber, respectively.

for T0<T0uplim

( )

E = h A ln 1
R
2


(1 ) ln 12
T

1
1
ln 2 + ln
+L
R
T0
T0
1
f ( , )
T0up lim

( )

(8)

and for T0 ≥ T0uplim the output energy is E = 0, where


1 + 3 e 50 3
(1 + )
+ 0, 08 1 e

1 + L

1
ln R

Further, for the modeling we assumed typical values for the following parameters:
λ = 1,064 µm - radiation wavelength of the Nd:YAG laser,
L = 0,05 - dissipative losses in the laser,
l = 50 mm - the length of the laser rod,
dl = 3 mm - the laser rod diameter,
rl = dl/2 - the laser rod radius,
A/As = 1,
l = 8 cm - the length of the laser resonator,
ls = 3 mm - the length of the saturable absorber.

(9)

( )

f ( , ) = 1,15 0, 2e5 0,9 2

(10)

(1 )
0,15 + 0,9
e
+
(150 3 )
1
e

The simulation results are obtained with the convenient mathematical simulation tool MATLAB.
Substituting the values of the above constants and parameters into (7), we obtain α = 3,1071 and β = 0,253. The calculated value of the factor hνA/(2γσ) is about 24 mJ.
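The constants listed above are sufficient to reproduce the quoted values of α, β and the energy-scale factor hνA/(2γσ). The short Python sketch below performs this check; it is an illustrative calculation, not part of the original MATLAB simulation.

```python
# Numerical check of alpha, beta and h*nu*A/(2*gamma*sigma) from the listed constants.
import math

h = 6.62e-34          # Planck constant [J*s]
c = 3e8               # light speed in vacuum [m/s]
lam = 1.064e-6        # Nd:YAG wavelength [m]
gamma = 1.0           # inversion reduction factor
sigma = 2.8e-19       # stimulated-emission cross section [cm^2]
sigma_gs = 8.7e-19    # absorber ground-state absorption cross section [cm^2]
sigma_es = 2.2e-19    # absorber excited-state absorption cross section [cm^2]
area_ratio = 1.0      # A/As
d_l = 0.3             # laser rod diameter [cm]
A = math.pi * (d_l / 2) ** 2                     # effective beam area [cm^2]

alpha = sigma_gs / (gamma * sigma) * area_ratio  # eq. (7)
beta = sigma_es / sigma_gs                       # eq. (7)
E_scale = (h * c / lam) * A / (2 * gamma * sigma)  # h*nu*A/(2*gamma*sigma) [J]

print(f"alpha = {alpha:.4f}")                      # ~3.107
print(f"beta  = {beta:.3f}")                       # ~0.253
print(f"h*nu*A/(2*gamma*sigma) = {E_scale*1e3:.1f} mJ")  # ~24 mJ
```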

We used the more convenient expression for weighted


(normalized) output pulse energy:

Enorm = E / [hνA/(2γσ)]

(11)

Now we can obtain output pulse energy using relations from


(8) to (11).

As in [3], we used here the following function to express the


optimal output reflectivity:

Ropt = (Rlowlim)^m(α,β)                (12)

Picture 6 shows the calculated Enorm from (11) as a function of T0 for three typical values of R, Picture 7 shows the family of calculated Enorm from (11) as a function of T0 and R, and Picture 8 shows the calculated values of Ropt for several values of T0.

where m(α,β) =

( 2,85 )( 1,1)

( + 1)
1,5 1,3
2 ( 2 )0,5

e0,83

+ 5e


(13)

Relation (8) shows that the output pulse energy is proportional to the losses of the saturable absorber, (1-β)·ln(1/T0²), and inversely proportional to the total resonator losses, ln(1/T0²) + ln(1/R) + L, in the situation where the saturable absorber saturates. From (12) it is obvious that the optimal output reflectivity is larger than Rlowlim. The factor m(α,β) is smaller than unity and is found by numerical analysis [3].
To illustrate the utility of the present model, in the following we present a numerical example of modeling a solid-state Nd:YAG laser with a passive Cr4+:YAG saturable absorber.

Picture 6. Calculated Enorm as a function of T0 for three typical values of R


From Picture 6, T0uplim = 54% is obtained for R = 20%, T0uplim = 76% for R = 50%, and T0uplim = 90% for R = 80%.
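These values can be cross-checked against relation (5). The following sketch assumes the form of (5) given above and the α, β values computed earlier; it is a quick numerical check, not part of the original simulation.

```python
# Evaluate (T0)uplim from relation (5) for the three mirror reflectivities.
import math

alpha, beta, L = 3.1071, 0.253, 0.05

def t0_up_lim(R):
    return math.exp(-(math.log(1 / R) + L) / (2 * (alpha * (1 - beta) - 1)))

for R in (0.20, 0.50, 0.80):
    print(f"R = {R:.0%}:  (T0)uplim = {t0_up_lim(R):.1%}")
# ~53.4 %, ~75.5 % and ~90.2 %, in good agreement with the values read from Picture 6
```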


5. CONCLUSION
In this paper we used the theory of the ESA (excited state
absorption) effect, the algorithm of the numerical fitting
procedure and optimization method, in order to determine
the output pulse energy of the passive Q-switched solid state
laser, as function of the initial transmission of the saturable
absorber and the reflectivity of the output coupler (mirror).
We have shown numerically, in the case of an Nd:YAG laser with a passive saturable-absorber Q-switch, that the combination of appropriate Ropt and T0 enables the modeling of optimized output pulse energy. The numerical fitting procedure performed in [3] is based on fitting a huge amount of experimental data. Lacking such an extensive experimental database, we used in this paper, as a first step in output pulse energy modeling, the experience of other authors in numerical fitting and established our framework for future modeling. We plan to establish our own methods and experiments in the Optoelectronic Laboratory, by combining passive saturable absorbers with variable initial transmission coefficients and output mirrors with variable reflection coefficients. After obtaining the experimental results, we will compare the results of the numerical calculations with the experimental data in order to determine the degree of agreement.

Picture 7. Calculated Enorm in function of R for variable


T0 (from 10% to 90%). The maximum of the output
energy is denoted with the circle

References
[1] Koechner,W., Bass,M.: Solid state lasers: a graduate
text, Springer-Verlag, New York, 2003.
[2] www.time-bandwidth.com/technology.
[3] Chen,Y.F., Lan,Y.P., Chang,H.L.: 'Analytical model for design criteria of passively Q-switched lasers', IEEE Journal of Quantum Electronics, vol.37, no.3, March 2001, pp. 462-468.
[4] Degnan,J.: 'Optimization of passively Q-switched lasers', IEEE Journal of Quantum Electronics, vol.31, no.11, November 1995, pp. 1890-1901.
[5] Degnan,J.: 'Theory of the optimally coupled Q-switched laser', IEEE Journal of Quantum Electronics, vol.25, no.2, February 1989, pp. 214-220.
[6] Xiao,G., Bass,M.: 'A generalized model for passively Q-switched lasers including excited state absorption in the saturable absorber', IEEE Journal of Quantum Electronics, vol.33, no.1, January 1997, pp. 41-44.
[7] Zhang,X., Zhao,S., Wang,Q., Zhang,Q., Sun,L., Zhang,S.: 'Optimization of Cr4+-doped saturable-absorber Q-switched lasers', IEEE Journal of Quantum Electronics, vol.33, no.12, December 1997, pp. 2286-2294.
[8] Zhang,X., Zhao,S., Wang,Q., Liu,Y., Wang,J.: 'Optimization of Dye Q-switched lasers', IEEE Journal of Quantum Electronics, vol.30, no.4, April 1994, pp. 905-908.

Picture 8. Calculated values of Ropt for variable T0


In Table 1 we have summarized the results obtained from Pictures 7 and 8.

Table 1. Calculated values of the output pulse energy for Ropt and T0

Ropt (%)   T0 (%)   Enorm   E (mJ)
  12         10      1,89     45
  23         20      1,31     31
  33         30      0,97     23
  44         40      0,73     17
  54         50      0,54     13
  63         60      0,39     9,4
  73         70      0,26     6,2
  83         80      0,15     3,6
  92         90      0,05     1,2
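As a simple consistency check of Table 1, the absolute energies in the last column should equal the normalised energies multiplied by the ≈24 mJ factor from (11). The sketch below (using decimal points instead of the table's decimal commas) verifies this.

```python
# Consistency check of Table 1: E should be close to Enorm * (h*nu*A/(2*gamma*sigma)).
E_scale_mJ = 24.0
rows = [(12, 10, 1.89, 45), (23, 20, 1.31, 31), (33, 30, 0.97, 23),
        (44, 40, 0.73, 17), (54, 50, 0.54, 13), (63, 60, 0.39, 9.4),
        (73, 70, 0.26, 6.2), (83, 80, 0.15, 3.6), (92, 90, 0.05, 1.2)]
for r_opt, t0, e_norm, e_mj in rows:
    print(f"Ropt={r_opt}%  T0={t0}%  Enorm*24 mJ = {e_norm * E_scale_mJ:5.1f} mJ  (table: {e_mj} mJ)")
```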


HFSW RADAR DESIGN: TACTICAL, TECHNOLOGICAL AND


ENVIRONMENTAL CHALLENGES
DEJAN NIKOLI
Institute VLATACOM, Belgrade, Serbia, dejan.nikolic@vlatacom.com
BOJAN DOLI
Institute VLATACOM, Belgrade, Serbia, bojan.dzolic@vlatacom.com
NIKOLA TOSI
Institute VLATACOM, Belgrade, Serbia, nikola.tosic@vlatacom.com
NIKOLA LEKI
Institute VLATACOM, Belgrade, Serbia, nikola.lekic@vlatacom.com
VLADIMIR D. ORLI
Institute VLATACOM, Belgrade, Serbia, vladimir.orlic@vlatacom.com
BRANISLAV M. TODOROVI
Institute VLATACOM, Belgrade, Serbia, branislav.todorovi@vlatacom.com

Abstract: With a maximal range of about 200 nautical miles, along with ship detection and oceanographic monitoring functionalities, HFSW radars provide the unique capability of complete Exclusive Economic Zone monitoring. The uniqueness of HFSW propagation, which follows the Earth's curvature, introduces various challenges, making HFSW radar design a very demanding task. The most important factors, such as the electrical properties of the water and the height of waves on the sea/ocean, the levels of natural and man-made noise, interference, as well as sea clutter, must be carefully considered during the design and development of an HFSW radar. Moreover, tactical demands represent new challenges, which especially influence the installation and operation of HFSW radar.
Keywords: Radar, HF radar, OTH radar, HFSW, vessel detection, marine systems.
Although primarily developed for oceanographic research, HFSW radars nowadays find increasing application in EEZ monitoring. By combining several HFSW radars, a radar network which provides monitoring and control of a large surface area is realized. These radar networks show great potential for EEZ monitoring. In addition to HFSW radars, radars that use reflection of electromagnetic waves from the ionosphere may also be found in the HF frequency band. These radars are usually referred to as High Frequency Over The Horizon Backscatter (HF-OTH-B) radars. The range of HF-OTH-B radars is several thousand kilometers, but their blind zone stretches over a few hundred kilometers, which makes them unsuitable for EEZ monitoring. This type of radar will not be a subject of this paper.

1. INTRODUCTION
Nowadays it has become clear that control of territorial waters is not enough to ensure a secure flow of goods from the Exclusive Economic Zone (EEZ). The EEZ is a zone stretching 200 nmi (approx. 370 km) from territorial waters in the direction of the open sea, in which countries have exclusive rights such as the exploitation of the biological and mineral resources of the sea. Increasing organized crime carried out away from territorial waters makes control of the whole EEZ a must for every maritime nation, not a privilege of a few wealthy and economically developed countries.
To the best of our knowledge, there are only two ways to achieve complete EEZ monitoring. The first approach utilizes optical and microwave sensors on platforms such as satellites and airplanes, thus avoiding the sensors' limitations. The other approach uses a network of high frequency surface wave (HFSW) radars [1], [2] to ensure constant surveillance well beyond the horizon. Since the price of an HFSW radar network is significantly lower than the combined cost of the aforementioned sensors and their platforms, it is clear why these radars are slowly becoming the sensor of choice for maritime surveillance at OTH distances.

The paper examines the most important questions encountered in the design, development, installation and exploitation of HFSW radar. In addition, this paper also considers the tactical demands which end users usually have regarding the system's overall performance.
The rest of the paper is organized as follows: first we
describe tactical situation we are addressing and explain

demands which end users usually have in Section 2.


Environmental challenges are described in Section 3.
Section 4 is dedicated to the technical challenges and
ways to solve them. HFSW radar networks are discussed
in Section 5 and we give conclusion in Section 6.

state, wind direction, the direction of movement of the waves, the target's RCS and the levels of atmospheric, cosmic and man-made noise [2], [3]. All of the aforementioned factors are interconnected and sometimes mutually dependent. In this chapter some of the most important environmental factors are discussed.

2. TACTICAL SITUATION AND DEMANDS

Firstly, the electrical characteristics of the propagation surface will be briefly discussed. The lower the conductivity of the surface, the greater the losses. Salty sea water has better conductivity than dry land and thus lower losses. The water salinity level determines the electrical characteristics of the water [4, 5]. This implies that HFSW radars can only operate in coastal areas and monitor large bodies of water. Moreover, propagation losses rise with increasing frequency.

Since the monitoring of the EEZ is the targeted application, it is clear that the vessels of interest are those used for the transportation of goods. These vessels include various types of cargo vessels, tankers and crude oil tankers. Apart from civilian vessels, larger military vessels are also targets of interest. For example, a gun boat is a small vessel and could hardly sail on the open sea; on the other hand, a cruiser is a large vessel and its presence on the open sea is quite common. All targets of interest have the following common characteristics:
- Most of the vessels are very large and their length is more than 100 meters, while the length of some vessels can exceed 300 meters.
- The displacement of the vessels is usually more than 50000 DWT, while some vessels can carry even more than 300000 DWT of goods. (Although military vessels have smaller displacement, their size is comparable to the size of commercial vessels whose displacement is a few times greater. For example, Kirov class battle cruisers have a displacement of 24 000 DWT and a length over 250 meters, while the Aframax class of commercial vessels has a displacement from 75 000 to 115 000 DWT and a length of less than 300 meters.)
- Except for some military vessels, the top speed of the vessels seldom exceeds 25 knots. The usual cruising speed is from 15 to 20 knots.


Next, in the HF band, radar range is primarily affected by the level of external noise. External noise consists of atmospheric, cosmic and artificial noise.
Atmospheric noise predominantly depends on the geographic location and the season (winter, spring, summer or autumn), while cosmic noise depends on the time of day or night. From the HF point of view, there are 6 periods during the 24 hours: 00h-04h, 04h-08h, 08h-12h, 12h-16h, 16h-20h, and 20h-24h. Artificial (man-made) noise varies by region (rural, suburban or urban zones). A detailed description of atmospheric and cosmic noise can be found in [6], while man-made noise mostly depends on the economic development of the area around the radar site. In general, the level of external noise is greater at lower frequencies.
Additional losses in the propagation of HF surface waves are caused by ripples on the propagation surface. In other words, propagation losses depend on the roughness of the sea surface. The roughness of a sea surface, also known as Sea State, is most commonly described with the Douglas scale. On this scale the sea state is expressed with digits from 0 to 9. A higher number on the scale corresponds to a higher wave height, which leads to higher propagation losses. Analysis of sea states from 0 to 6 shows that the increase of wave height is proportional to the increase in propagation loss. A detailed analysis of this phenomenon can be found in [7].

This being said, it is obvious that these vessels are not able to maneuver sharply. Moreover, they travel along a straight line most of the time, and when they make turns, they do so slowly within wide arcs.

The demands placed before HFSW radar designers are the following:
- Achieve radar coverage of as large an area as possible. Ideally the HFSW range should be around 200 nmi.
- Detect and track all targets of interest on the open sea in all weather conditions.
- Minimize false alarms, while maintaining good detection capabilities and long track lifetime.

Reflections from sea waves represent clutter to an HFSW radar dedicated to vessel traffic surveillance and control. The main mechanism of interaction between electromagnetic waves and sea waves is Bragg scattering. This phenomenon originates from sea waves whose mutual distance is half the radar wavelength and which travel radially relative to the radar. First-order resonant scattering produces two dominant peaks in the Doppler spectrum, the so-called Bragg lines. Second-order scattering is caused by the interaction between the EM wave and the other sea waves, which leads to side lobes around the Bragg lines. These Bragg lines have a negative effect on vessel detection, since their reflections are quite strong and may mask reflections originating from the vessels.
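To give a feeling for where these Bragg lines appear, the sketch below estimates their Doppler offsets for carrier frequencies in the band discussed later in the paper. It assumes the standard deep-water dispersion relation for gravity waves, which is not stated explicitly in the paper, so it should be read as an illustrative estimate only.

```python
# Estimated Doppler shift of the first-order Bragg lines for a few HF carriers,
# assuming deep-water gravity-wave dispersion (illustrative assumption).
import math

g, c = 9.81, 3e8          # gravity [m/s^2], speed of light [m/s]

def bragg_doppler(f0_hz):
    """Doppler offset of the first-order Bragg lines for carrier frequency f0 [Hz]."""
    return math.sqrt(g * f0_hz / (math.pi * c))

for f0_mhz in (4.5, 5.0, 7.0):
    fb = bragg_doppler(f0_mhz * 1e6)
    print(f"f0 = {f0_mhz} MHz -> Bragg lines at about +/-{fb:.2f} Hz")
```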

It is clear that an HFSW radar needs to provide reliable detection and stable tracking at great distances regardless of the weather conditions. This is a very demanding task, with multiple challenges and only one facilitating circumstance - the target model.

3. ENVIRONMENTAL CHALLENGES
HFSW radar range is influenced by various factors. The most important ones are: operating frequency, the electrical properties of the propagation surface, season, time of day and night, sea

All aforementioned factors must be carefully considered


during practical design of a system. Frequencies between

4.5 and 7 MHz yield the best range/cost performance. While the maximal range (200 nmi) is reached with frequencies between 4.5 and 5 MHz, the environmental challenges discussed above and the size of the system, discussed in [8], suggest that the best range/cost performance is achievable with frequencies between 5 and 7 MHz (maximal range greater than 100 nmi).

directed towards the Rx array. Tx antenna array radiation


pattern is shown in Picture 2.

4. TECHNICAL SOLUTIONS
The technical solutions proposed in this paper for the challenges and demands presented in the previous chapters are based on a Frequency Modulated Continuous Wave (FMCW) HFSW radar. According to our research, FM signal forms are superior to pulsed forms in HFSW applications. The main reason lies in the fact that resolution cells at OTH distances are quite large, while the target model is not very dynamic, which implies that integration of the signal can be performed for a long time and thus better SNR performance is achieved. Moreover, the power amplifier design is more easily realized with solid-state devices, since there is a need for a few kilowatts rather than a few hundred kilowatts.

Picture 2. Tx antenna array radiation pattern


vHF-OTHR transmits linear frequency chirps; the frequency shift Δf between the transmitted signal and the received echo determines the target's range, as shown in Picture 3.

A continuously swept RF signal is transmitted, while the receiver constantly receives the radar echo. In order to achieve decoupling between transmitter and receiver, the Rx and Tx antennae must be installed at separate locations. This results in a very typical site geometry with two separated antenna arrays, as illustrated in Picture 1. The Tx antenna array is a planar antenna array consisting of 4 antennae, while the Rx array is a classical uniform linear array and may contain between 4 and 16 antennae. This is a typical configuration of the HFSW radar produced at Institute Vlatacom and it is denoted as Vlatacom HF-OTHR (vHF-OTHR). More information regarding vHF-OTHR and its site geometry may be found in [8] and [9].

The range cell depth is related to the bandwidth B of the chirp signal. The range resolution is obtained by the Fourier transform of each single chirp signal. Thus the range resolution is defined as:

ΔR = c/2B                (1)

where c is the speed of light and B stands for the frequency bandwidth of the chirp signal.

The dry land between the Rx and Tx antenna arrays serves as an insulator which attenuates the direct EM wave towards the Rx array. This represents only one part of the decoupling mechanism.

Picture 3. Transmitted and received signals


The highest possible range resolution of the radar system is limited by the available chirp signal bandwidth (the width of the gaps in the radio spectrum). To find the optimum radar operating frequency and bandwidth, frequency scans are started regularly. In case of a highly occupied radio spectrum, the bandwidth is reduced. A typical bandwidth in the 8 MHz frequency band is B = 100 kHz, which corresponds to a range resolution of ΔR = 1.5 km.
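The trade-off between occupied bandwidth and range resolution in (1) is easy to tabulate; the one-line calculation below is an illustrative sketch with example bandwidths.

```python
# Range resolution from eq. (1): dR = c / (2B), for a few example chirp bandwidths.
c = 3e8                          # speed of light [m/s]
for B in (50e3, 100e3, 200e3):   # example chirp bandwidths [Hz]
    print(f"B = {B/1e3:.0f} kHz -> range resolution {c/(2*B)/1e3:.2f} km")
# B = 100 kHz gives the 1.5 km resolution quoted in the text
```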

Picture 1. vHF-OTHRs typical site geometry


Since EEZ monitoring is the primary goal of this system, most of the Tx power is radiated towards the body of water. The Tx antenna array radiation pattern defines the surveillance area of the HFSW radar and additionally decouples the Rx and Tx arrays, since its minimum is

The azimuthal angle covered by the HFSW radar is 60°, perpendicular to the linear receive antenna array that consists of a number of elements located along the shore.

The block diagram of the receiver section is given in


Picture 4.

Picture 6. Signal space after DFT [9]


Picture 4. Receiver section

For each of the 16 antennas, the baseband signal is obtained by multiplying the received signal with the local oscillator signal (equal to the transmitted signal). The following stages use a notch filter and a low-pass filter with a cut-off frequency of 1 kHz to limit the bandwidth of the signal. Afterwards, the signals enter the A/D converter blocks, whose outputs are forwarded to the signal processing block.
The complex-valued Fourier amplitudes of the chirps determine the samples of the slowly varying modulation of the backscattered signal, which contains the information on the ocean surface variability. During signal processing three types of processing are performed.
3.

Azimuth calculation - In the third and final step, the incident angle of the planar reflected ray is calculated. By processing all 16 real and imaginary signal components with the beamforming technique, the angle between the incident ray and the line perpendicular to the antenna array is calculated for each pair of antennae. The signal space obtained after this processing is given in Picture 7.

1. Range calculation - The first step of signal processing includes range determination. For the duration of each chirp a number of equidistant samples are collected and submitted to an FFT. In the spectral domain, differences in frequency correspond to the range of the potential targets. This transformation yields the signal space as defined in Picture 5.
Picture 7. Signal-space upon beamforming [9]
The described procedure yields a signal space that is ready to enter the ship detection procedure.
The main interference contribution for the system is due to the sea surface echo signals. In a range-Doppler frequency map the so-called Bragg lines are observed permanently and limit the target detection performance. The technical challenge is to detect targets in this strong interference environment and to control the false alarm probability. In addition to this, target RCS in the HF band is quite different from target RCS in microwave bands, which makes an already hard situation even more complicated. The problem lies in the fact that vessel RCS in the HF band has a resonant nature, rather than an optical one (as in microwave bands); hence the RCS cannot be easily described, although it may be estimated [10].

Picture 5. Signal space after FFT [9]


2.

Doppler frequency calculation - To suppress the frequency offset due to the range of the target and extract the frequencies relevant to the movement of the potential targets, a collection of samples is taken. Namely, one value from each chirp, at the same position in the sequence of chirps, is used to calculate a single Doppler frequency image by calculating the DFT of those values. In this way the signal space is transformed into an image as shown in Picture 6.
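The range and Doppler steps described above can be illustrated with a minimal numerical sketch on synthetic dechirped data: a Fourier transform over the samples of each chirp resolves range, and a second transform over the sequence of chirps resolves Doppler. The signal parameters below are illustrative assumptions, not the actual vHF-OTHR settings.

```python
# Minimal range-Doppler processing sketch on simulated dechirped (beat) data.
import numpy as np

n_chirps, n_samples = 128, 256
range_bin, doppler_bin = 40, 10   # assumed target position in FFT bins

m = np.arange(n_chirps)[:, None]    # chirp index (slow time)
n = np.arange(n_samples)[None, :]   # sample index within one chirp (fast time)
signal = np.exp(2j * np.pi * (range_bin * n / n_samples + doppler_bin * m / n_chirps))
signal += 0.1 * (np.random.randn(n_chirps, n_samples)
                 + 1j * np.random.randn(n_chirps, n_samples))   # additive noise

range_profiles = np.fft.fft(signal, axis=1)         # step 1: range FFT per chirp
range_doppler = np.fft.fft(range_profiles, axis=0)  # step 2: Doppler DFT across chirps

peak = np.unravel_index(np.abs(range_doppler).argmax(), range_doppler.shape)
print("strongest cell (Doppler bin, range bin):", peak)   # expected (10, 40)
```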

Classical target detection using a constant false-alarm rate (CFAR) algorithm follows the beamforming and fast Fourier transform in the HF radar data processing chain. The typical HF echo signal environment is shown in Picture 8 and illustrates the complexity of the ship detection process.

this type of application were discussed in [11]. In the end,


no matter which principle is applied, the ultimate goal is
always the same - the formation of a radar network with
the minimal level of false alarms and the most precise
determination of the targets parameters.
In addition to the requirement of uniformity of the obtained data, the radar network must meet another important requirement: it must be designed in a way which covers as much of the surveillance zone as possible.
In Picture 9, an example of a nation's EEZ and a vHF-OTHR network for control of that EEZ is shown. The black line represents the EEZ border, while the vHF-OTHRs' surveillance areas are marked with blue circular clippings. If the radar network is formed as shown in Picture 9, there are clearly gaps in the surveillance area (shown in red). The gaps located near the coast are easily monitored by maritime short-range radars. On the other hand, the gap at the end of the surveillance area cannot be covered by the available sensors. However, a vessel located in that zone must sooner or later enter the surveillance area of a neighboring vHF-OTHR, and will be detected in a timely manner.

Picture 8. HF radar range-Doppler map [9]


Most of the observed signal components are labelled in Picture 8, which includes sea clutter (resonant scattering of the first and second order), ionospheric interference, radio interference, and several ship and aircraft targets. Due to the complexity of the real signal environment the algorithm employs adaptive procedures in the detection process to adjust to the varying clutter, noise and interference levels.
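The paper does not specify which CFAR variant is used; purely as an illustration, the sketch below implements a basic one-dimensional cell-averaging CFAR of the kind commonly applied along a range or Doppler cut, with assumed guard/training sizes and threshold scale.

```python
# Illustrative cell-averaging CFAR along one cut of a range-Doppler map.
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=5.0):
    """Return boolean detections for a 1-D power profile (assumed parameters)."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        leading = power[i - guard - train:i - guard]
        trailing = power[i + guard + 1:i + guard + 1 + train]
        noise_level = np.mean(np.concatenate((leading, trailing)))
        detections[i] = power[i] > scale * noise_level
    return detections

rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 200)      # noise/clutter power
profile[60] += 40.0                      # injected target echo
print(np.nonzero(ca_cfar(profile))[0])   # cell 60 should be among the detections
```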
In order to perform all signal processing tasks in real time with different vHF-OTHR configurations, reprogrammable hardware imposes itself as the solution. Moreover, reprogrammable hardware eases tuning for a particular environment and simplifies future upgrades. The platform needed for real-time calculation of all the aforementioned processing is built on sophisticated but commercially available components, thus providing high-end performance at affordable cost.

5. HFSW RADAR NETWORKS


Picture 9. Example of HFSW radar network

One radar is often not sufficient to determine the precise position of targets within a large coverage area such as the EEZ. Therefore, radar networks composed of multiple HFSW radars whose surveillance zones partially overlap are a must [2, 11, 12].

Finally, during network planning the following parameters must be carefully considered:

In addition to a larger coverage area, a radar network can improve the target detection accuracy, but it may also lead to a rise in false alarms.
There are two principles which reduce the level of false alarms: sectoral radar network organization or the use of data fusion algorithms. The first approach effectively excludes the occurrence of false alarms because the complete radar network is divided into sectors in which data can be accepted only from the radar assigned to the specific sector. The downside of this approach is the inconsistency of target tracking, as a small delay in tracking occurs when the target moves from one sector to another. With the second approach, which utilizes data fusion algorithms, the rise in false alarms cannot be completely avoided, but it can be reduced to a practically negligible level. On the other hand, data fusion algorithms may improve detection and parameter determination accuracy, because they take into account data obtained from all radars which detect the target. Some fusion algorithms in

- Site availability - an HFSW site occupies a large area, which may be private property and thus unavailable for HFSW installation.
- Infrastructure - in developing countries the road and electrical networks are often unavailable at remote places.
- Frequency availability - in developed countries this may be a major problem, especially in heavily populated areas.
- Integration and availability of AIS data - while in developed countries the availability and technical correctness of AIS transceivers is not questionable, in developing countries this is not the case, and integration of AIS data of questionable quality must be performed in order to minimize false alarms [13].

Organized as explained, the radar network minimizes the sensors' imperfections, thus minimizing the tactical impact of gaps present in the surveillance area. The overall result is cost-effective monitoring and control of a nation-wide EEZ.

[2] Barca,P.,
Maresca,S.,
Grasso,R.,
Bryan,K.,
Horstmann,J.: Maritime Surveillance with Multiple
Over-the-Horizon HFSW Radars: An Overview of
Recent Experimentation, IEEE Aerospace and
Electronic Systems magazine, 30(12) (2015), 4 19.
[3] Skolnik,M.I.: Radar Handbook 3rd edition, McGrawHill, Inc., 2008.
[4] Electrical Characteristics Of The Surface Of The
Earth, Recommendation ITU-R P.527-3, CCIR,
Geneva, 1992.
[5] Ground Wave Propagation Curves For Frequencies
Between 10 KHz and 30 MHz, Recommendation
ITU-R P.368-9, 2007.
[6] Spaulding,A.D., Washburn,J.S.: Atmospheric Radio
Noise: Worldwide Levels and Other Characteristics,
NTIA Report 85-173, April 1985.
[7] Barrick,D.E.: Theory of Ground-Wave Propagation
Across A Rough Sea at Decameter Wavelengths,
Battelle Memorial Institute, 1970
[8] Tosic,N.,
Dzolic,B.,
Nikolic,D.,
Lekic,N.,
Todorovi,B.: Izazovi pri projektovanju HFSW
radara, ETRAN 2016, Zlatibor, SR, June 2016.
[9] vHF-OTH Radar System Design Document, Institute
Vlatacom, December 2014, internal company
standard document
[10] Lekic,N., Nikolic,D., Milanovic,B., Vucicevic,D.,
Valjarevic,A., Todorovic,B.: Impact of Radar Cross
Section on HF Radar Surveillance Area: Simulation
approach, Proc. of 2015 IEEE Radar conference,
Johannesburg, RSA, 2015.
[11] Nikolic,D., Popovic,Z., Borenovic,M., Stojkovic,N..,
Orlic,V., Dzvonkovskaya,A., Todorovic,B.: MultiRadar Multi-Target Tracking Algorithm for
Maritime Surveillance at OTH Distances, IRS 2016,
Krakow, PL, 11-15 May 2016.
[12] Anderson,S.J.: Optimizing HF Radar Siting for
Surveillance and Remote Sensing in the Strait of
Malacca, IEEE Transactions of Geoscience and
Remote Sensing, Vol.51, No.3, 1805-1816. March
2013.
[13] Nikolic,D.,
Stojkovic,N.,
Lekic,N.,
Orlic,V.,
Todorovic,B.: Integration of AIS data and HF
OTHR tracks in unfavorable environment at OTH
distances, Proc. of IcETRAN 2016, Zlatibor, SR,
June 2016.

6. CONCLUSION
HFSW radars have their place in the integrated maritime surveillance system as a sensor that can monitor over-the-horizon areas at an affordable price. The design, development, installation, and exploitation of an HFSW radar set out a series of engineering and organizational challenges.
First of all, the requirements of HF surface-wave propagation and the impact of an unfavorable environment (very strong atmospheric noise, galactic noise and man-made noise) must be taken into consideration. The carrier frequency selection has a tremendous impact on the final operation of the whole system. Lower frequencies yield greater coverage, but that coverage comes at the price of the size of the system, which increases the area needed for system installation and the volume of construction work needed to successfully install the system. Moreover, noise is stronger at lower frequencies and more Tx power is needed to achieve good SNR. Finally, in order to detect vessels in sea clutter, strict requirements are placed on the receiver's sensitivity and dynamics, on the high stability of the FMCW signal and on the low phase noise of the whole system. In addition, one HFSW radar is often not enough to cover the whole EEZ, and a network of HFSW radars must be designed in order to achieve complete EEZ monitoring, which also increases the cost of the system.
However, there is a way to overcome these challenges at a price which is still more than acceptable, especially when compared to the price of alternative EEZ monitoring solutions (MW/optical sensors and the needed platforms). In this paper tactical, technical and environmental challenges during HFSW radar design are presented. Guidelines for overcoming them are shown from the vHF-OTHR perspective. Recommendations for system design, frequency allocation, optimal signal modulation and Tx power, as well as other system parameters, are carefully considered and discussed. Also, the most important aspects of HFSW radar network design are highlighted. Taking all these aspects into account, together with the careful design of each radar in the network, leads to complete EEZ monitoring.

References
[1] Fabrizio,G.: High Frequency Over-the-Horizon
Radar: Fundamental Principles, Signal Processing,
and Practical Applications, McGraw-Hill, inc.,
2013.


EFFECTIVENESS OF ACTIVE VIBRATION CONTROL OF A FLEXIBLE


BEAM USING A DIFFERENT POSITION OF STRAIN GAGE SENSORS
MIROSLAV JOVANOVI
Technical Test Centre, Belgrade, mjovano@sbb.rs
ALEKSANDAR SIMONOVI
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, asimonovic@mas.bg.ac.rs
NEBOJA LUKI
Technical Test Centre, Belgrade, nesaluca@ptt.rs
NEMANJA ZORI
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, nzoric@mas.bg.ac.rs
SLOBODAN STUPAR
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, sstupar@mas.bg.ac.rs
SLOBODAN ILI
Technical Test Centre, Belgrade, slobodan.ili@vs.rs

Abstract: This paper presents the experimental determination of the effectiveness of an active vibration control system depending on the sensor location. The active structure consists of a composite beam as the host structure, strain gages as the sensors and a piezoceramic patch as the actuation element. The Wheatstone bridge sensor platform is used to measure the strain of the mechanical system, which represents the control signal in the system. The active vibration control system is controlled by a PID control strategy. The control algorithm was implemented on the PIC32MX440F256H microcontroller platform. For the experiment two locations for the sensor platform were chosen. In order to determine the influence of the piezoelectric ceramic actuator's operation on the sensor (control) signal, the tip deflection of the beam is measured by a high speed camera. The effectiveness of the active vibration control system for both sensor and actuator positions is considered. Experimental results demonstrate that the presented control method is effective for both locations, but the stability of the system can be violated.
Keywords: Active vibration control, PID controller, strain gages, PZT actuator, composite beam.
sensors: strain gages, piezoelectric materials, shape
memory alloys, electro-strictive materials, magnetostrictive materials, electro-rheological fluids and fiber
optics [3]. The use of piezoelectric sensors and actuators
for active vibration control of flexible structures has
attracted much attention in recent years [4, 5, 6]. However
the use of other sensor platforms is not excluded for
different types of active vibration control. Shin [7] used
the accelerometers as the feedback sensors. Hu [2] and
Jovanovi [8] investigated the system of active vibration
control with strain gages feedback sensors.

1. INTRODUCTION
The presence of vibrations is a common problem in light mechanical structures and may result in instability and decreased performance, and can also lead to catastrophic failure [1]. In order to minimize the undesirable effects of structural vibrations, smart structures have attracted much attention in recent years, because they improve many properties of structures while retaining their flexibility and adaptability.
Recently, there has been a considerable interest in the area
of structural vibrations active control by using
piezoelectric materials as sensors and actuators. The
development of this area has been started with
mathematical models of system element integrations. A
smart structure can be defined as a structure with bonded
or embedded sensors and actuators as well as an
associated control system, which enables the structure to
respond simultaneously to external excitations on it and
then suppresses undesired effects or enhances desired
effects [2]. Many materials are used as actuators and

The present work investigates the effectiveness of an active vibration control system for a smart cantilever beam with different strain gage positions. The proposed application setup consists of a composite cantilever beam with a fiber-reinforced piezoelectric actuator and strain gage sensors. The strain gage signal amplifier was developed and integrated into the main board. The proportional-integral-differential (PID) control algorithm is implemented on a PIC32-PINGUINO-OTG board, with an integrated PIC32MX440F256H microcontroller [8, 9]. The effects of

hA = 0.76 mm , which is bonded on top side and 25 mm


away from the clamped edge.

vibration damping and the shape of the sensor signal are investigated for two sensor locations, and the effectiveness of the whole system is determined from the tip displacement of the composite beam. The experiment considers vibration control under periodic excitation and experimental results of the composite beam tip displacement. The strain gage signal shapes at the different positions are compared.

Resistive strain gages are the most common type of position sensor used for the control of piezoelectric actuators. They are often integrated into the piezoelectric stack actuator for position feedback. Strain sensors can also be used with other piezoelectric actuators simply by bonding the sensor to the actuator surface or in the vicinity of the actuator. The Wheatstone bridge is usually used to measure the strain of the mechanical system.

2. EXPERIMENTAL TEST-BED
The active vibration system with the PID control algorithm was developed by the authors [8, 9]. The previous research was conducted on metal and composite structures and confirmed the proposed active vibration system, with the aim of increasing the damping of the active structure. In order to select the appropriate sensor position on the composite beam and determine its influence on the effectiveness of the active vibration system, two locations of the strain gage sensor were chosen for the experimental research.

The use of the Wheatstone full-bridge sensor platform has the following advantages: normal strains are compensated, thermal strains are compensated to a high degree, interference effects are largely suppressed through internal bridge connections, and the change in AC voltage level is very suitable for measuring the vibration signal [11]. Two sets of 120 Ω Wheatstone full bridges are bonded at 25 mm and 90 mm from the clamped edge. At the free end an aluminum mount with a pulley is integrated, which is connected across a rubber belt with an eccentric pulley driven by an electric motor. The active structure is presented in Pic. 1(a); the dimensions and positions of all components are given in Pic. 1(b).

The composite beam was designed and manufactured using carbon/epoxy materials. During the lay-up procedure, 5 plies of T300J carbon/epoxy with the orientation (0/-45/0/45/0) were stacked and cured in vacuum for 24 hours. The geometrical size of the beam is: length Lb = 310 mm, width Wb = 60 mm, and thickness hb = 1.5 mm.

All sensors and actuator are symmetrically located about


the y-axis, as shown in Pic. 1.

Piezoelectric PZT patch actuator (MIDE QP20w) with


length LA = 50.8 mm , width WA = 38 mm , and thickness

Picture 1. Cantilever composite beam with integrated piezoelectric actuator and strain gage sensors: a) model of test
rig, b) dimensions and positions of smart composite beam

The control system is implemented on a PIC32-PINGUINO-OTG board, with an integrated PIC32MX440F256H microcontroller. The full-bridge strain gage output signal from position 1 or 2 is amplified with an instrumentation amplifier (AD623AN) with a matched gain of 200 to the voltage range of -1 V to +1 V, and converted into digital data through an A/D (analog to digital) converter (LTC1864, 16-bit). The output of the PID controller is sent to the voltage amplifier for the PZT actuator through a D/A (digital to analog) converter (DAC8523, 16-bit). The piezoelectric actuator is driven by a high voltage amplifier (PDA X3),


which amplifies a low voltage signal in the range -5 V to +5 V to a high voltage signal in the range -200 V to +200 V. The sampling period (ts) of the controller is selected as ts = 1 ms.
In the experiment, the electric motor excites the active beam at the first mode. At the first mode the strain of the beam at the root reaches its maximal values and the strain gage outputs (ySG) reach their maximal values in volts.
Pic. 2 presents the active vibration damping experimental setup considered in this study.
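For illustration, the sketch below shows a discrete PID law of the kind described, using the SG1 gains reported later in Section 4 and the ±5 V range of the D/A stage. The incremental discrete form (gains applied directly to the running error sum and to the sample-to-sample difference) and the Python formulation are assumptions made for the sketch; this is not the actual PIC32 firmware.

```python
# Minimal sketch of a discrete PID control law for the 1 ms sampling loop.
def make_pid(kp, ki, kd, out_min=-5.0, out_max=5.0):
    state = {"acc": 0.0, "prev": 0.0}
    def step(error):
        state["acc"] += error                      # running sum of the error samples
        diff = error - state["prev"]               # sample-to-sample difference
        state["prev"] = error
        u = kp * error + ki * state["acc"] + kd * diff
        return max(out_min, min(out_max, u))       # clamp to the +/-5 V DAC range
    return step

pid = make_pid(2.4, 0.0035, 35.0)                  # gains reported for sensor SG1
for error in (0.00, 0.05, 0.08, 0.06, 0.02):       # example strain-gage error samples [V]
    print(f"u = {pid(error):+.3f} V")
```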

Picture 2. System of active vibration control


the time-domain response measured by the strain gage sensor output (ySG) is shown in Pic. 3(a). The frequency response of the first two bending modes can be obtained by employing the fast Fourier transform (FFT), as shown in Pic. 3(b). From Pic. 3(b), it can be seen that the frequencies of the first two bending modes are f1 = 8.51 Hz and f2 = 30.83 Hz, respectively.

3. EXPERIMENTAL PARAMETERS IDENTIFICATION
To identify the dynamic characteristics of the cantilever carbon/epoxy beam, the impulse hammer method is applied to perform an experimental modal test. To excite the bending vibration, the beam was hit with a hammer at the free end. After excitation of the first two bending modes,

Picture 3. The first two bending modes vibration measured by strain gages: (a) time-domain response and (b)
frequency-domain response


The measured signal of the strain gage outputs has a sinusoidal shape too and its maximum magnitude is ySG2 = 0.45 V. The differences in magnitude between the measured signals are the result of the strain gage positions. For positions closer to the root the mechanical strain has a higher value and, in accordance with the strain gage linearity, the measured strain gage output voltage has higher values. The 65 mm distance between the strain gages reduces the strain gage output by ΔySG = ySG1 - ySG2 = 0.13 V. The strain gage output at position SG2 is reduced by about 22% in relation to the maximum magnitude at position SG1.

In order to analyze the beam strain in the case of forced vibration, the electric motor excites the active beam at the first natural frequency, f1 = 8.51 Hz. The strain gage outputs are measured in volts after the instrumentation amplifier (AD623). The tip displacement, yTD, of the beam is measured in millimeters with a high speed camera (Olympus i-Speed 3 with matched software). The phase lag angle between the output signals measured by the strain gage sensors is not considered in the experiments.
The beam strain gage output (ySG1) for the open loop system (without active control) at the position of strain gage 1 (SG1) is given in Pic. 4(a). The measured signal of the strain gage output has a sinusoidal shape and its maximum magnitude is ySG1 = 0.58 V. The beam strain gage output for the open loop system (without active control) at the position of strain gage 2 (SG2) is given in Pic. 4(b).

The tip displacement, yTD, in the case of the open loop system (without active control) is the same for both and is given in Pic. 4(c) and 4(d). The maximum magnitude of the tip displacement is yTD = 7.6 mm with a sinusoidal shape.

Picture 4. The beam strain gage output and tip displacement: (a) strain gage output at position of SG1, (b) strain gage
output at position of SG2, (c) and (d) tip displacement of the beam

4. EXPERIMENTAL DETERMINATION OF
ACTIVE VIBRATION CONTROL

vibration suppression. Those corrections must be carefully tuned without any perturbation of the stability margin.

In order to verify the effectiveness of the two different strain gage (Wheatstone full-bridge) sensor based control methods, an experimental comparison was conducted with PID control for the first bending mode.

In accordance with the previous works of the authors [8, 17], the best PID gains for SG1 and SG2 are considered: KpSG1 = 2.4, KiSG1 = 0.0035, KdSG1 = 35 and KpSG2 = 3.1, KiSG2 = 0.0085, KdSG2 = 45.

The proportional, Kp, integral, Ki, and derivative, Kd, gains of the PID controller are obtained by using the Ziegler-Nichols method for both sensor positions. Manual corrections of the PID factors are preferable for maximal

The time-domain closed-loop vibration response with continuous excitation (Fe(t) = const.) of the first bending mode damped by the PID control algorithm for SG1

and SG2 is shown in Figure 5.


The strain gage output, ySG2, from position 2 is given in Pic. 5(b). The magnitude ySG2 is not constant and varies between 0.033 V ≤ ySG2 ≤ 0.036 V. Comparing the strain gage outputs for position 2 in the open and closed loop, it can be concluded that the magnitude is decreased by 92%.

The strain gage output, ySG1, from position 1 is given in Pic. 5(a). The magnitude ySG1 is not constant and varies between 0.045 V ≤ ySG1 ≤ 0.047 V. Comparing the strain gage outputs for position 1 in the open and closed loop, it can be concluded that the magnitude is decreased by 91.89%.

Picture 5. The beam strain and tip displacement with active vibration control: (a) strain gage output at the position of SG1, (b) strain gage output at the position of SG2, (c) tip displacement for the control signal at position 1, and (d) tip displacement for the control signal at position 2

vibration suppression in active control has better results for the control signal from SG2.

From Pic. 5(a) and 5(b) it is obvious that the signal at position 2 has lower values, which is in accordance with the experimental results presented in this paper for the open loop. The signal at position 1 has remarkable noise at the peaks. The noise represents interference of higher frequencies from the actuator's operation into the strain gages. The noise at the magnitude peaks is not visible for the strain gage output at position 2.

The frequency-domain closed-loop vibration response of the first bending mode for both strain gage outputs is shown in Pic. 6.
In the control voltage plots shown in Pic. 6, the SG2 signal has given better results in active vibration control. The effectiveness in vibration suppression is higher than for the control with signal SG1 in relation to the tip displacements. However, although the control with strain gages positioned outside the actuator area with position sensor feedback can suppress the vibration effectively, it can introduce instability into the system. The magnitude at the second bending mode has increased. In this case the stability margin of the active vibration control system is perturbed and, at the second mode, the spillover effect appears. The spillover effect will not lead to full instability of the system, but its appearance will reduce the

The tip displacement, yTD, in the case of the closed loop system (with active control) differs depending on the position of the selected control signal (SG1 or SG2) and is given in Pic. 5(c) and 5(d). The tip displacement for the control signal from position 1 is in the range -0.8 mm ≤ yTD ≤ 0.8 mm, Pic. 5(c), while the tip displacement for the control signal from position 2 is in the range -0.6 mm ≤ yTD ≤ 0.6 mm, Pic. 5(d). In accordance with the given results it can be concluded that the

performance of the active vibration control system. With adequate tuning of the PID coefficients this effect can be reduced while retaining good damping characteristics.


with a greater distance between actuator and sensor has a tendency toward the spillover effect.
To compensate for the spillover effect of the system, manual tuning of the PID controller gain coefficients is preferable, with the aim of increasing the effectiveness of the whole system and decreasing the negative influence of the residual modes.
References

[1] Worden,K., Bullough,W.A., Haywood,J.: Smart


Technologies, World Scientific, Singapore, 2003.
[2] Hu,J., Zhu,D.: Vibration Control of Smart Structure
Using Sliding Mode Control with Observer, Journal
of Computers 7 (2012) 411-418.
[3] Her,S.C., Lin,C.S.: Vibration Analysis of Composite
Laminate Plate Excited by Piezoelectric Actuators,
Sensors 13 (2013) 2997-3013.
[4] Qiu,Z.C., Zhang,X., Wu,H., Zhang,H.: Optimal
placement and active vibration control for
piezoelectric smart flexible cantilever plate, Journal
of Sound and Vibration 301 (2007) 521543.
[5] Shen,Y., Homaifar,A.: Vibration Control of Flexible Structures with PZT Sensors and Actuators, Journal of Vibration and Control 7 (2001) 417-45.
[6] Le,G., Lu,Q., Fei,F., Liu,L., Liu,Y., Leng,J.: Active vibration control based on piezoelectric smart composite, Smart Materials and Structures 22(12) (2013) 125032
[7] Shin,C., Hong,C., Jeong,W.B.: Active vibration
control of beam structures using acceleration
feedback control with piezoceramic actuators,
Journal of Sound and Vibration 331 (2012) 1257
1269.
[8] Jovanovi,M.M.,
Simonovi,A.M.,
Zori,N.D.,
Luki,N.S., Stupar,S.N., Ili,S.S.: Experimental
studies on active vibration control of a smart
composite beam using a PID controller, Smart
Materials and Structures 22(11) (2013) 115038
[9] Simonovi,A.M.,
Jovanovi,M.M.,
Luki,N.S.,
Zori,N.D., Stupar,S.N., Ili,S.S.: Experimental
studies on active vibration control of smart plate
using a modified PID controller with optimal
orientation of piezoelectric actuator, Journal of
Vibration and Control 22(11) (2016) 2619-2631.
[10] Hoffmann,K.: An Introduction to Measurements
using Strain Gages, Darmstadt: Hottinger Baldwin
Messtechnik GmbH, 1989.

Picture 6. The beam RMS strain gage output for both


positions of strain gages in closed loop.
The results and effectiveness of the active vibration control system for the two sensor positions are summarized in Table 1.

Table 1. The effectiveness of the active vibration control system

Sensor     Sensor output     Beam tip displacement   Sensor output   Beam tip displacement   Effectiveness   Damping
location   without control   without control         with control    with control
           [V]               [mm]                    [V]             [mm]                    [%]             [%]
SG1        0.58              7.6                     0.047           0.8                     91.9            89.5
SG2        0.45              7.6                     0.036           0.6                     92              92.1
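The effectiveness and damping percentages in Table 1 follow directly from the open- and closed-loop amplitudes; the short sketch below recomputes them as an illustrative check.

```python
# Recompute the effectiveness figures of Table 1 from the measured amplitudes.
cases = {"SG1": (0.58, 0.047, 7.6, 0.8), "SG2": (0.45, 0.036, 7.6, 0.6)}
for name, (y_open, y_closed, tip_open, tip_closed) in cases.items():
    eff = 100 * (1 - y_closed / y_open)       # sensor-output reduction [%]
    damp = 100 * (1 - tip_closed / tip_open)  # tip-displacement reduction [%]
    print(f"{name}: effectiveness {eff:.1f} %, damping {damp:.1f} %")
# expected roughly 91.9 % / 89.5 % for SG1 and 92.0 % / 92.1 % for SG2
```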

5. CONCLUSION
This paper presents experimental results of active vibration suppression of a flexible beam with a bonded PZT patch actuator and two mounted full Wheatstone bridges as sensor platforms. Position-sensor-based proportional-integral-differential (PID) feedback is used to actively control the vibration of the beam, and its stability is experimentally verified. The higher effectiveness of the active vibration control system is achieved with the sensor positioned away from the actuator. The active vibration control system with position feedback involves a negative occurrence when the control sensor is set close to the actuator boundary: high frequencies from the actuator's operation couple into the sensor control signal. However, the system


INFLUENCE OF GEOMETRICAL PARAMETERS ON PERFORMANCE OF


MEMS THERMOPILE BASED FLOW SENSOR
DANIJELA RANDJELOVI
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
danijela@nanosys.ihtm.bg.ac.rs
OLGA JAKI
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
olga@nanosys.ihtm.bg.ac.rs
MILE M. SMILJANI
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
smilce@nanosys.ihtm.bg.ac.rs
PREDRAG POLJAK
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
predrag.poljak@nanosys.ihtm.bg.ac.rs
ARKO LAZI
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
zlazic@nanosys.ihtm.bg.ac.rs

Abstract: The aim of this work is to study how the performance of thermal flow sensors depends on the variation of specific geometrical parameters. A self-developed 1D analytical model was applied to a MEMS sensor based on the Seebeck effect. The main elements of the analysed structure are: p+Si/Al thermocouples, a p+Si heater, a thermally and electrically isolating membrane and a residual n-Si layer in the membrane area. Two thermopiles consisting of N thermocouples are placed symmetrically on both sides of the heater. In this type of flow sensor the output signal is obtained as the difference between the Seebeck voltages generated at the downstream and upstream thermopile. It was assumed that the sensor is placed in a constant air flow. Several parameters of interest were calculated, including the flow induced temperature difference established between the downstream and upstream thermopile, the output voltage and the sensitivity. Simulations were performed in order to analyse the dependence of these parameters on the residual n-Si layer thickness (dnSi), the distance between the hot thermopile junctions and the heater (l), and the thermocouple width (wTP) and length (lTP). Simulation results show that the sensitivity of the thermal flow sensor improves with increasing l and lTP. On the other hand, the performance of the sensor also improves if dnSi or wTP decreases.
Keywords: MEMS, flow sensor, thermopile, analytical modelling.
1. INTRODUCTION
The operation of thermopile-based MEMS sensors relies on the Seebeck effect, i.e. the transformation of a temperature difference into a voltage; they therefore belong to the thermal type of sensors. One of the main advantages of thermopile-based sensors is the versatility of their applications [1, 2], such as: vacuum sensors [3-6], gas type sensors [7-9], thermal converters [5, 10], IR detectors [11], accelerometers [12] and flow sensors [5, 13-15].

For the purpose of studying the performance of the multipurpose thermopile sensors developed at ICTM, an analytical model was developed. The aim of this work is to apply the self-developed 1D analytical model to study how the performance of thermal flow sensors depends on the variation of specific geometrical parameters.

The first part of the paper describes the geometry, basic elements and principle of operation of the ICTM thermal flow sensor. The second part gives an overview of the analytical model and general expressions for the main parameters of a thermal flow sensor. Next, simulation results are presented, followed by a discussion.

2. BASIC ELEMENTS AND PRINCIPLE OF OPERATION OF THERMAL FLOW SENSOR
Basic elements of the studied thermal flow sensor are shown in Picture 1. A p+Si heater is in the central part of the chip, while two thermopiles, each consisting of 30 p+Si/Al thermocouples, are placed laterally from the heater. All these elements lie on a thermally isolating membrane which
is formed using bulk micromachining and a special post-etching technique [5]. The membrane consists of sputtered SiO2 of fixed thickness (1 µm) and a residual n-Si layer of variable thickness.

Picture 1. Photograph of a flow sensor chip with p+Si heater and two thermopiles consisting of p+Si/Al thermocouples

Picture 2 illustrates the principle of operation of a thermopile-based flow sensor. When the sensor is not in the fluid flow, the temperature profile established on the chip due to the power generated at the heater is symmetrical. Operation of the thermal flow sensor is based on the thermal interaction between the moving fluid and the sensor. The fluid flow is reflected in an asymmetry of the temperature profile, ΔTf(uf). This asymmetry is caused by an increase of the Seebeck voltage of the downstream thermopile and a decrease of the Seebeck voltage of the upstream thermopile.

Picture 2. Principle of operation of a thermopile-based flow sensor: due to the fluid flow the temperature profile becomes asymmetrical, while simultaneously the Seebeck voltage of the upstream thermopile drops and that of the downstream thermopile rises

3. OVERVIEW OF ANALYTICAL MODEL AND BASIC PARAMETERS OF A THERMAL FLOW SENSOR
The general analytical model for multipurpose thermal sensors was presented in [5]. Based on this core model, specific models were developed depending on the sensor's application: flow sensor [15], vacuum sensor [4, 6], gas sensor [7-9]. Since the analytical modelling of thermal flow sensors is covered in detail in [15], only an overview of the expressions for the parameters of interest is given here.

The crucial parameter for thermopile-based sensors is the temperature difference established between the hot and cold thermocouple junctions, ΔT, which, in general, depends on the ambient temperature and pressure and on the power developed at the heater. We will assume that all these parameters are constant: the sensor is placed at fixed room temperature and atmospheric pressure, while the power generated at the p+Si heater is kept constant. On the other hand, the temperature difference depends on various geometrical parameters: 1) the residual n-Si thickness, dnSi, 2) the distance between the heater and the hot thermopile junctions, l, 3) the thermocouple width, wTP, and 4) the thermocouple length, lTP.

As explained previously, in thermal flow sensors the basic parameter is the flow induced temperature difference, ΔTf, established between the hot junctions of the downstream and upstream thermopile

ΔTf(dnSi, l, wTP, lTP) = ΔTdownstream(dnSi, l, wTP, lTP) − ΔTupstream(dnSi, l, wTP, lTP)    (1)

Specific parameters of thermal flow sensors can be deduced from the flow induced temperature difference. As explained before, the output signal of this type of flow sensor is obtained as the difference between the Seebeck voltages established at the downstream and upstream thermopile. Assuming that one thermopile consists of N thermocouples with Seebeck coefficient α, the output voltage can be written as

U(dnSi, l, wTP, lTP) = N·α·ΔTf(dnSi, l, wTP, lTP)    (2)

The absolute sensitivity of the thermal flow sensor, S, is calculated as the ratio of the output voltage and the square root of the fluid flow velocity, uf

S = U / √uf    (3)

Another important parameter is the normalized sensitivity, Snorm, obtained when the absolute sensitivity is divided by the power generated at the heater, P

Snorm = S / P = U / (P·√uf)    (4)

The dependence of ΔT on the geometrical parameters listed above reflects also in the Seebeck voltages generated at the thermopiles, leading to the expression

U(dnSi, l, wTP, lTP) = N·α·ΔT(dnSi, l, wTP, lTP)    (5)
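As a purely illustrative numerical sketch of Eqs. (2)-(4), the snippet below uses N = 30 thermocouples (as on the described chip) together with the heater power and flow velocity quoted in the simulation section (95 mW, 5 m/s); the Seebeck coefficient and the flow induced temperature difference are assumed placeholder values, not data from the paper:

import math

# Illustrative evaluation of Eqs. (2)-(4); alpha and dT_f are assumed values, not measured data.
N = 30          # thermocouples per thermopile
alpha = 0.5e-3  # assumed effective p+Si/Al Seebeck coefficient [V/K]
dT_f = 0.8      # assumed flow induced temperature difference [K]
u_f = 5.0       # air flow velocity [m/s]
P = 95e-3       # heater power [W]

U = N * alpha * dT_f        # Eq. (2): output voltage [V]
S = U / math.sqrt(u_f)      # Eq. (3): absolute sensitivity
S_norm = S / P              # Eq. (4): normalized sensitivity
print(f"U = {U*1e3:.1f} mV, S = {S*1e3:.2f} mV/(m/s)^0.5, S_norm = {S_norm:.3f} V/(W*(m/s)^0.5)")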

4. SIMULATION RESULTS
As mentioned before, the simulations were performed under the assumption that the sensor is placed at fixed room temperature and atmospheric pressure. The power generated at the p+Si heater is also kept constant and equals 95 mW. All calculations were done for the sensor placed in a laminar air flow with a velocity of 5 m/s.

Pictures 3 and 4 show the results obtained for the thermal sensor with wp+Si = 60 µm and wAl = 40 µm. The basic sensor parameters were calculated as a function of the residual n-Si thickness, dnSi, and the distance between the heater and the hot thermopile junctions, l.
Picture 4. Normalized sensitivity of the thermal flow sensor (wp+Si = 60 µm, wAl = 40 µm) as a function of the residual n-Si thickness, dnSi, and the distance between the heater and the hot thermopile junctions, l, for two regions: a) dnSi < 5 µm and b) dnSi = (5 - 10) µm

The next set of simulations considered the influence of the thermocouple width on the sensor's performance. For these calculations it was assumed that the p+Si and Al stripes forming the thermocouples have equal width, wp+Si = wAl, so in that case the thermocouple width is denoted by wTP. It was assumed that the residual n-Si thickness is dnSi = 5 µm.

Picture 3. Simulation results obtained for the thermal sensor with wp+Si = 60 µm and wAl = 40 µm. The following parameters were calculated as a function of the residual n-Si thickness, dnSi, and the distance between the heater and the hot thermopile junctions, l: a) temperature difference established between the downstream and upstream thermopile, b) voltage difference between the downstream and upstream thermopile, c) flow sensor sensitivity


Picture 5 shows the dependences of the temperature difference between the downstream and upstream thermopile, the voltage difference between the downstream and upstream thermopile and the flow sensor sensitivity as functions of the thermocouple width, wTP, and the distance between the heater and the hot thermopile junctions, l.

Picture 5. Temperature difference between downstream and upstream thermopile (a), voltage difference between downstream and upstream thermopile (b) and flow sensor sensitivity (c) as functions of thermocouple width, wTP (= wp+Si = wAl), and the distance between the heater and the hot thermopile junctions, l. It was assumed that the residual n-Si thickness is dnSi = 5 µm

On the other hand, the graphs in Picture 6 show the same parameters calculated when, apart from the width of the thermocouples, their length, lTP, is also varied. It was assumed that the distance between the heater and the hot thermopile junctions is kept constant, l = 20 µm.

Picture 6. Temperature difference between downstream and upstream thermopile (a), voltage difference between downstream and upstream thermopile (b) and flow sensor sensitivity (c) as functions of thermocouple width, wTP (= wp+Si = wAl), and thermocouple length, lTP. It was assumed that the residual n-Si thickness is dnSi = 5 µm and that the distance between the heater and the hot thermopile junctions is l = 20 µm

5. CONCLUSION
The self-developed analytical model was applied to examine the influence of the chosen geometrical parameters on the performance of a thermopile-based flow sensor. The output signal of the thermal flow sensor was obtained as the difference between the Seebeck voltages generated at the downstream and upstream thermopile. It was assumed that the sensor is placed in a constant air flow. Several parameters of interest were calculated, including the flow induced temperature difference established between the downstream and upstream thermopile, the output voltage and the sensitivity. Simulation results showed that the sensitivity of the thermal flow sensor improves when the distance between the hot thermopile junctions and the heater, l, or the length of the thermocouples, lTP, increases. A similar benefit is observed when the residual n-Si thickness, dnSi, or the thermocouple width, wTP, decreases. These results are very important for the design of a structure optimized for the flow sensor application.

ACKNOWLEDGMENT
This work has been partially supported by the Serbian Ministry of Education, Science and Technological Development within the framework of the Project TR32008.

References
[1] Meijer, G.C.M., Herwaarden, A.W., Thermal Sensors, IOP Publishing, Bristol, 1994.
[2] Van Herwaarden, A.W., van Duyn, D.C., van Oudheusden, B.W., Sarro, P.M., "Integrated Thermopile Sensors", Sensors and Actuators A, 21-23 (1989) 621-630.
[3] Van Herwaarden, A.W., van Duyn, D.C., Groeneweg, J., "Small-size vacuum sensors based on silicon thermopiles", Sensors and Actuators A, 25-27 (1991) 565-569.
[4] Randjelovi, D., Jovanov, V., Lazi, ., Djuri, Z., Mati, M., "Vacuum MEMS Sensor Based on Thermopiles - Simple Model and Experimental Results", Proc. 26th Int. Conf. on Microelectronics MIEL 2008, 2, Ni, Serbia, (2008) 367-370.
[5] Randjelovi, D., Petropoulos, A., Kaltsas, G., Stojanovi, M., Lazi, ., Djuri, Z., Mati, M., "Multipurpose MEMS Thermal Sensor Based on Thermopiles", Sensors and Actuators A, 141 (2008) 404-413.
[6] Randjelovi, D.V., Frantlovi, M.P., Miljkovi, B.L., Popovi, B.M., Jaki, Z.S., "Intelligent Thermal Vacuum Sensors Based on Multipurpose Thermopile MEMS Chips", Vacuum, 101 (2014) 118-124.
[7] Randjelovi, D., Lazi, ., Popovi, M., Mati, M., "Helium Sensing Using Multipurpose Thermopile-Based MEMS devices", Proc. 28th International Conference on Microelectronics MIEL 2012, Ni, Serbia, May 13-16, (2012) 147-150.
[8] Randjelovi, D., Lazi, ., Jaki, O., Vasiljevi-Radovi, D., "Analytical Modelling of Hydrogen Sensing Using IHTM Thermopile Based MEMS Multipurpose Sensors", Proc. 5th Int. Scientific Conference on Defensive Technologies OTEH 2012, Belgrade, Serbia, (2012) 662-667.
[9] Randjelovi, D., Jaki, O., Smiljani, M.M., Lazi, ., "Study of possibilities of application of a thermopile-based gas sensor", Proc. 6th Int. Scientific Conference on Defensive Technologies OTEH 2014, Belgrade, Serbia, (2014) 519-523.
[10] Klonz, M., Laiz, H., Kessler, E., "Development of Thin-Film Multijunction Thermal Converters at PTB/IPHT", IEEE Transactions on Instrumentation and Measurement, 50(6) (2001) 1490-1498.
[11] Graf, A., Arndt, M., Sauer, M., Gerlach, G., "Review of micromachined thermopiles for infrared detection", Measurement Science and Technology, 18 (2007) R59-R75.
[12] Dauderstadt, U.A., de Vries, H.S., Hiratsuka, R., Sarro, P.M., "Silicon accelerometer based on thermopiles", Sensors and Actuators A, 46-47(1-3) (1995) 201-204.
[13] Van Oudheusden, B.W., "Silicon thermal flow sensors", Sensors and Actuators A, 30 (1992) 5-26.
[14] Kaltsas, G., Nassiopoulou, A.G., "Novel C-MOS compatible monolithic silicon gas flow sensor with porous silicon thermal isolation", Sensors and Actuators A, 76(1-3) (1999) 133-138.
[15] Randjelovi, D., Djuri, Z., Petropoulos, A., Kaltsas, G., Lazi, ., Popovi, M., "Analytical modelling of thermopile based flow sensor and verification with experimental results", Microelectronic Engineering, 86(4-6) (2009) 1293-1296.

MONITORING PHYSIOLOGICAL STATUS OF THE SOLDIER DURING COMBAT MISSION VIA INTEGRATED MEDICAL SENSOR (HEART RATE, OXYGEN SATURATION) SYSTEM
OLIVER MLADENOVSKI
Military Academy General Mihailo Apostolski, Skopje, Macedonia, don.rome0@yahoo.com
JUGOSLAV ACKOSKI
Military Academy General Mihailo Apostolski, Skopje, Macedonia, jugoslav.ackoski@ugd.edu.mk
MILAN GOCI
University of Nis, Faculty of Civil Engineering and Architecture, Nis, Serbia, mgocic@yahoo.com

Abstract: The physiological status and condition of soldiers are very important for the outcome of their activity. Taking medical personnel along at all times on the battlefield is risky, and checking all the soldiers takes time. The main objective of this paper is to create a model of a system that will help to do that. Data collected from different sensors are transferred to a control unit, which is connected to the main application. The main application stores the collected data, such as heart rate and oxygen saturation, and describes the physiological situation of the checked soldier. The results from the checking unit are transferred to higher commands through secured networks. The main application consists of different modules (analyzing, calculating, and counting). Each module represents a different physiological status.
Keywords: sensor, heart rate, oxygen saturation, control unit.

1. INTRODUCTION
Checking the health condition of every soldier at the same time has always been a problem, and a long period of time is needed to do the analysis [1]. After collecting the data, analyzing the results and drawing further conclusions require a lot of time, which can be crucial for the soldier's health during field training and especially during battle conditions.

Therefore, some kind of computer-based physiological test, software or information system should be designed [2-6]. In [2], it is stated that dynamic, adaptive procedures, particularly ones based on item-response theory (IRT) and computerized adaptive testing (CAT) methods, will be implemented in new tests that will be more efficient, reliable and valid than existing test procedures. In [7], a wearable physiological monitoring system called Smart Vest is presented, which monitors the following physiological signals: electrocardiogram, photoplethysmogram, body temperature, blood pressure, galvanic skin response and heart rate. In [8], a multiparameter wearable physiologic monitoring system for space and terrestrial applications is shown.

The main aim of this paper is to present a system that reduces the time for all needed checkups, decreases the risk of sending medical personnel into dangerous areas, accelerates data collection, and analyzes the data.

2. SYSTEM ARCHITECTURE
The MPS IMS (Monitoring Physiological Status via Integrated Medical Sensor) consists of five modules (Picture 1):
module for collecting data,
module for transferring data,
module for analyzing data (mathematical formulations),
module for monitoring,
module for live streaming.

The main module in MPS IMS is the module for collecting data, which is placed on the finger of the soldier.

The module for transferring data [11] is responsible for transferring the data from the sensor to the general base where the medical team is located.

The module for analyzing data is the central module of the system, because based on the received information the medical team forms a conclusion and then gives feedback to the soldiers about where the wounded soldier is.

The modules for monitoring [9] and live streaming help the medical team to follow the physiological status of different soldiers at the same time. Monitoring is available 24/7 while the soldier is on the battlefield.

Picture 1. Architecture of the integrated medical sensor system


The system operates in the following manner: first the sensor acquires the variables, i.e. data, from the soldier; the data are presented on the Raspberry Pi screen and from there, via Python scripts and a LAN cable or wireless network, transferred to the general base where the main medical team is located and monitors the condition of the soldiers.
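A minimal sketch of this collect-and-send step is given below. It is illustrative only: the read_pulse_oximeter() helper and the base-station address are hypothetical placeholders rather than the actual e-Health Sensor Shield interface, and only standard-library calls are used for the network transfer:

import json
import socket
import time

BASE_STATION = ("192.168.1.10", 5000)  # hypothetical address of the general-base server

def read_pulse_oximeter():
    # Placeholder for the actual SPO2/pulse read-out on the Raspberry Pi.
    return {"soldier_id": "S-01", "pulse_bpm": 72, "spo2_percent": 98, "timestamp": time.time()}

def send_reading(reading, address=BASE_STATION):
    # One JSON-encoded, newline-terminated message per reading over TCP.
    with socket.create_connection(address, timeout=5) as conn:
        conn.sendall((json.dumps(reading) + "\n").encode("utf-8"))

if __name__ == "__main__":
    while True:
        send_reading(read_pulse_oximeter())
        time.sleep(1.0)  # one reading per second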

3. COLLECTING AND TRANSFERRING DATA
The main part of this system is the module for collecting data and their further transfer. Depending on the desired condition, there are several sensors [10] that are connected to the Raspberry Pi, from where the data are transferred via scripts and Wi-Fi connections to the General Base where the monitoring is set up.

2.1. Technical data about the system
In this system the Raspberry Pi 2 (1 GB RAM) was used with the e-Health Sensor Shield V2.0, which allows Raspberry Pi users to perform biometric and medical applications where body monitoring is needed by using 10 different sensors: pulse, oxygen in blood (SPO2), airflow (breathing), body temperature, electrocardiogram (ECG), glucometer, galvanic skin response (GSR - sweating), blood pressure (sphygmomanometer), patient position (accelerometer) and muscle/electromyography sensor (EMG).

This system collects the data from the sensor for blood pulse and oxygen saturation (Picture 2), which the soldier carries; the data collected from the sensor are shown on the Raspberry Pi screen (Picture 3).

The SPO2 sensor can be described as a measurement of


the amount of oxygen dissolved in blood based on the
detection of Hemoglobin and Deoxyhemoglobin.
The obtained information can be used to monitor in real
time the state of a patient or to get sensitive data in order
to be subsequently analysed for medical diagnosis.
Gathered biometric information can be wirelessly sent
using several available connectivity options such as WiFi, 3G, GPRS, Bluetooth and ZigBee depending on the
application.

Picture 2. Measuring data from the sensor for blood pulse and oxygen saturation


Picture 3. Data presentation at the Raspberry Pi screen

The analyzed data are transferred to the medical office in the main base [10]. If the current condition of a soldier gets worse, the system will raise an alert. The whole process runs on its own; the medical team only receives information about the health condition and about any problems.

4. MATHEMATICAL FORMULATION OF PROBLEM
The mathematical formulation is used to explain the analysis performed in the main base, i.e. the changes of the current parameters before the task and during it. It makes a connection between data collection and monitoring and describes the analyzing module.

Heart rate analysis should satisfy the following equation:

dP/dT = k·P    (1)

It explains the change of the pulse variable (P) over the period of time (T) that we want to check. After solving the equation we get:

P = C·e^(kt)    (2)

where P is the pulse, C is a constant, t is the time period and k is the data constant.

4.1. Oxygen Saturation
The following equation explains the change of the oxygen saturation variable (O) over the period of time (T):

dO/dT = k·O    (3)

After solving the equation we get:

O = C·e^(kt)    (4)

The final equation, which combines equations (1) and (3), is:

dP/dT + dO/dT = k·(P + O), i.e. dP + dO = k·(P + O)·dT, from which

dP + dO = k·(P + O) + C    (5)

It should be noted that, according to the analysis compiled from the sensors, the variables are equal to the data constant (k) multiplied by the sum of P and O, plus some constant.

5. MODULE FOR MONITORING
Another part of the system is the monitoring unit [9], which checks the health condition of every soldier every second. This gives the advantage of fast reaction in case of problems with soldiers. Also, live streaming of the health bar allows the whole system to be followed from a higher command level. The whole system is run by a special medical team located in the General Base. Collected data are transferred over secured networks.

Monitoring of the physiological status of the soldier during a combat mission [9] is allowed only in the main base or on the main server, where the collected data are processed and the results are displayed. Special permission from the medical team working there is needed for monitoring.

The diagram in Picture 4 shows the monitoring of the physiological status of the soldier.

Picture 4. Monitoring of the physiological status of the soldier during combat mission

On this diagram the medical team will have the full picture of the soldier's condition. If more sensors are included, the diagram will show more variables. If some variable goes out of its normal range, the system will raise an alert. The blue line shows the variable of the heart rate of the soldier, while the red line shows the oxygen saturation.
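A minimal sketch of such an alert check is given below; the SpO2 bands follow the ranges quoted in the next paragraph, while the pulse limits are assumed placeholder values:

# Illustrative alert check for one reading; pulse limits are assumed, SpO2 bands as quoted below.
PULSE_RANGE = (50, 120)  # assumed acceptable pulse range [bpm]

def classify_spo2(spo2):
    if spo2 >= 100:
        return "attention: possible carbon monoxide poisoning"
    if 95 <= spo2 <= 99:
        return "normal"
    if 88 <= spo2 <= 94:
        return "acceptable only for patients with a hypoxic drive problem"
    return "attention: low oxygen saturation"

def check_reading(pulse_bpm, spo2):
    alerts = []
    if not (PULSE_RANGE[0] <= pulse_bpm <= PULSE_RANGE[1]):
        alerts.append("attention: pulse out of range")
    status = classify_spo2(spo2)
    if status != "normal":
        alerts.append(status)
    return alerts or ["all variables within normal values"]

print(check_reading(72, 98))
print(check_reading(140, 91))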

Acceptable normal ranges for patients are from 95 to 99 percent. Those with a hypoxic drive problem would be expected to have values between 88 and 94 percent. Values of 100 percent can indicate carbon monoxide poisoning.

6. FUTURE IMPROVEMENTS
In the future the system can be improved by involving more sensors, such as sensors for body position, temperature and heart rate. The best improvement for this system would be an application containing the system information, so that the medical team can have full 24/7 monitoring wherever they have to go. The application can be downloaded on Android and kept on a mobile phone in case there is a problem with the main server or the server crashes. Also, data can be sent to the Cloud for permanent storage, or visualized in real time by sending the data directly to a laptop or smartphone.

7. CONCLUSION
The presented MPS IMS is a special type of information system whose purpose is to monitor the health of soldiers during battle conditions. The system shortens the time needed for collecting data, analyzing them and receiving feedback from the medical team. This system will help both the soldiers in the field and the medical teams, and it will be of particular help to soldiers who cannot receive immediate medical aid.

References
[1] Friedl, K.E., Grate, S.J., Proctor, S.P., Ness, J.W., Lukey, B.J., Kane, R.L., "Army research needs for automated neuropsychological tests: Monitoring soldier health and performance status", Archives of Clinical Neuropsychology, 22S (2007) S7-S14.
[2] Letz, R., "Continuing challenges for computer-based neuropsychological tests", Neurotoxicology, 24 (2003) 479-489.
[3] Kabat, M.H., Kane, R.L., Jefferson, A.L., Di Pino, R.K., "Construct validity of selected Automated Neuropsychological Assessment Metrics (ANAM) battery measures", The Clinical Neuropsychologist, 15 (2001) 498-507.
[4] Murnyak, G.R., Leggieri, M.J., Roberts, W.C., "The risk assessment process used in the Army's Health Hazard Assessment Program", Acquisition Review Quarterly, (2003) 200-216.
[5] Wittels, P., Johannes, B., Enne, R., Kirsch, K., Gunga, H.C., "Voice monitoring to measure emotional load during short-term stress", European Journal of Applied Physiology, 87 (2002) 278-282.
[6] Bleiberg, J., Kane, R.L., Reeves, D.L., Garmoe, W.S., Halpern, E., "Factor analysis of computerized and traditional tests used in mild brain injury research", The Clinical Neuropsychologist, 14 (2000) 287-294.
[7] Pandian, P.S., Mohanavelu, K., Safeer, K.P., Kotresh, T.M., Shakunthala, D.T., Gopal, P., Padaki, V.C., "Smart Vest: Wearable multi-parameter remote physiological monitoring system", Medical Engineering & Physics, 30 (2008) 466-477.
[8] Mundt, C.W., Montgomery, K.N., Udoh, U.E., Barker, V.N., Thonier, G.C., Tellier, A.M., et al., "A multiparameter wearable physiologic monitoring system for space and terrestrial applications", IEEE Trans Inform Tech Biomed, 9 (2005) 382-391.
[9] Zainee, N.M., Chellappan, K., "Emergency clinic multi-sensor continuous monitoring prototype using e-Health platform", Proc. 2014 IEEE Conference on Biomedical Engineering and Sciences (IECBES), IEEE, 2014.
[10] Khelil, A., et al., "Digiaid: A wearable health platform for automated self-tagging in emergency cases", Proc. 2014 EAI 4th International Conference on Wireless Mobile Communication and Healthcare (Mobihealth), IEEE, 2014.
[11] Jassas, M.S., Qasem, A.A., Mahmoud, Q.A., "A smart system connecting e-health sensors and the cloud", Proc. 2015 IEEE 28th Canadian Conference on Electrical and Computer Engineering (CCECE), IEEE, 2015.

ALUMINIUM TILES DEFECTS DETECTION BY EMPLOYING PULSED THERMOGRAPHY METHOD WITH DIFFERENT THERMAL CAMERAS
LJUBIA TOMI
Military Technical Institute, Belgrade, ljubisa.tomic@gmail.com
VESNA DAMNJANOVI
Faculty of Mining and Geology, University of Belgrade, vesna.damnjanovic@rgf.bg.ac.rs
GORAN DIKI
Military Academy, University of Defence in Belgrade, goran.dikic@mod.gov.rs
BOBAN BONDULI
Military Academy, University of Defence in Belgrade, bondzulici@yahoo.com
BOJAN MILANOVI
Military Academy, University of Defence in Belgrade, bojan.milanovic@va.mod.gov.rs
RADE PAVLOVI
Military Technical Institute, Belgrade, rade_pav@yahoo.com

Abstract: The results of nondestructive testing of aluminum test plates by pulsed thermography are presented in the
paper. Rectangular defects of different widths and depths are simulated in plan-parallel plates. The test plates are
recorded by two different thermal cameras. The experimental conditions during the two recording sessions remain
unchanged. The thermal images are compared and quantitatively analyzed. The parameter used for comparison in the
quantitative analysis is the maximum temperature difference between the defective and defect-free areas, at different
frame rates. The advantages and shortfalls of using different thermal cameras, and the benefits of external triggering
and automated synchronization of the light sources with the beginning of recording, are assessed.
Keywords: pulsed thermography, nondestructive testing, defects in material, temperature contrast.
1. INTRODUCTION
Active infrared thermography (IRT) is a nondestructive method applied to test and evaluate defects in material (NDT&E - Nondestructive Testing and Evaluation) [1]. The paper describes a nondestructive method for detecting defects in material, which is classified under active IRT and is usually called pulsed thermography (PT) because the heat source is a pulsed energy source. The method is based on the use of a thermal camera for visualizing defects in material (e.g. in military applications), which also enables two-dimensional measurement of the surface temperature distribution.

The PT technique applied in the present research is based on two high-resolution thermal cameras: FLIR X6540sc with a cooled detector and a high frame rate, and FLIR SC 620 with an uncooled detector, which also operated at a high frame rate of 120 Hz. The assessment of the thermal images generated immediately before and after the sample was illuminated during a recording sequence is based on monitoring the pixel temperature variation on the surface of the sample, frame by frame [5, 6].

2. INTRODUCTION TO THERMAL IMAGING SYSTEMS
Thermal imaging is scanning of the electromagnetic spectrum in the IR range; in other words, a thermal image is a scan of the radiated heat [1-4].

Thermal cameras, depending on the type of detector, are differentiated as cameras with cooled or uncooled detectors. Apart from the detector, the most complex component of the camera is the cooling system. The working hours of a cooled thermal camera are limited by the mechanical life cycle of the cooling system, but such thermal cameras are more accurate. However, uncooled thermal cameras are used more often because of their simpler design and low cost.

The thermal images generated in this type of testing are not ideal. The most frequent causes of thermal image degradation are radiometric distortion, geometric distortion and noise [1,2,3]. In PT, extraction of useful signals is constrained by external factors (camera sensitivity, light triggering energy, reflected background radiation, and noise), but also by thermal diffusion as an internal factor. Thermal images can be visually enhanced during data analysis by applying one of the standard 3D filtering methods [7, 8].
3. PULSED THERMOGRAPHY EXPERIMENTAL SETUP
The PT experimental setup is comprised of a heat source, a thermal camera and a computer for real-time data storage. Picture 1 shows the PT setup in the reflective mode, with one pulsed light source (3) placed in front of the test plate (1), and the thermal camera (2).

Picture 1. Pulsed thermography experimental setup: 1 - aluminum test plate; 2 - thermal camera FLIR X6540sc; and 3 - light source

The light source was a photographic flash, YASHICA CS-250AF. The sample was heated by positioning the flash at a distance that enabled uniform heating of the surface. The optimal position of the flash was about 5 cm from the heated surface, along its normal, but it could also be moved along the vertical. The light source positioned in this manner enabled generation of a sufficiently strong heat flux to the surface of the test plate (TP), which was illuminated on the defect-free side. The sample was heated evenly by diffusion, except in places where defects were encountered [2, 3, 9-11].

The properties of the TP and the environment dictate specific requirements for thermal imaging in NDT. The selection of the spectral range of the thermal camera essentially depends on the spectral emissivity of the material, the radiation, the temperature contrast, the atmospheric transmission, the type of IR detector, and the parasitic radiation of the environment. The effect of these parameters is not the same; e.g. the impact of air humidity on a mid-wave infrared (MWIR) camera is smaller, and, on the other hand, the impact of optical and electronic noise on such a camera is smaller than on a long-wave infrared (LWIR) camera. The emissivity coefficient is a parameter that depends on the wavelength; namely, when the emissivity coefficient is high, the effect of the parasitic reflection of the environment is smaller and defect detection is better.

The surface cooling process, i.e. the creation of a set of thermal images of the aluminum TP with simulated defects, was first monitored by the FLIR SC620 camera (Picture 2), equipped with an uncooled microbolometer and standard 24°x18° optics for a wavelength range from 7.5 to 13 µm. This camera has a focal plane array with a 640x480 semiconductor detector (VOx). Table 1 shows the specifications of the camera. The camera is also equipped with the accessories needed for direct connection to a computer, which supports continuous scanning of the object.

Picture 2. FLIR SC 620 thermal camera with uncooled microbolometer

The thermal images are stored on an SD memory card and transferred to the computer for analysis using FLIR QuickReport or ThermaCAM Researcher Professional software [12, 13].

Table 1. FLIR SC620 thermal camera specifications
Camera specific: FLIR SC620
FOV (Field of View): 24° horizontal and 18° vertical
Detector: 640 x 480 pixels, uncooled, microbolometer
Detector Pitch: 17 µm
NETD: <40 mK
Spectral range: 7.5 µm - 13 µm
Max frame rate: 120 Hz (640 x 120)
Accuracy: ±2 °C or ±2% of reading
Minimum focus: 0.3 m
Spatial resolution: 0.65 mrad for 24° lens
Operating temperature: -10 to 50 °C
Command & Control: FireWire, USB, Bluetooth, composite video
Image storage: >1000 images on built-in memory card
Video output: NTSC/PAL
Functions: focus, recording, storage, repeated invitation
Weight: 1.8 kg, incl. lens and battery

The thermal camera could be programmed to capture images at specified time intervals, and during the course of recording the thermal images could show the minimum, maximum and average temperatures.

Then the second thermal camera, FLIR X6540sc (Picture 3), was used; it has a cooled indium antimonide detector operating in the spectral range from 3 to 5 µm. The specifications of that camera are shown in Table 2.

Picture 3. FLIR X6540sc thermal camera with cooled indium antimonide detector

Table 2. FLIR X6540sc thermal camera specifications
Camera specific: FLIR X6540sc
FOV (Field of View): 11° horizontal and 8.8° vertical
Detector: 640 x 512 pixels, cooled, InSb
Detector Pitch: 15 µm
NETD: <25 mK (standard 20 mK)
Spectral range: 3 µm - 5 µm
Max frame rate: 355 Hz (Full Frame Mode); 4500 Hz with (320 x 8 pixels)
Integration time: 160 ns to 20 ms
Accuracy: ±1 °C or ±1% of reading
Operating temperature: 5 to 300 °C
Command & Control: Gigabit Ethernet, Camera Link, Detachable LCD Display, WiFi
Image storage: >1000 images on built-in memory card
Video output: NTSC/PAL, composite or S-video format
Functions: focus, recording, storage, repeated invitation
Weight: 5.05 kg with lens

The thermal images were captured in .ptw format on the PC for processing and analysis using ALTAIR software. The images could be converted from .ptw to the format of FLIR ResearchIR Self Viewing File Version 4.00.0.39b, which was used in the present research to support further image processing.
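The spatial resolution quoted for the SC620 in Table 1 is consistent with its field of view and horizontal pixel count; a rough instantaneous-field-of-view (IFOV) estimate from the tabulated values:

import math

# Rough IFOV estimate: horizontal field of view divided by the number of horizontal pixels.
for name, fov_deg, n_pixels in [("FLIR SC620", 24.0, 640), ("FLIR X6540sc", 11.0, 640)]:
    ifov_mrad = math.radians(fov_deg) / n_pixels * 1e3
    print(f"{name}: IFOV ~ {ifov_mrad:.2f} mrad")  # SC620 gives ~0.65 mrad, matching Table 1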

4. ALUMINUM TEST PLATES WITH SIMULATED DEFECTS
The size of the aluminum TPs was 50 mm x 30 mm x 2 mm and the simulated defects were grooves (cavities) of the same length but different widths and depths (15 mm x 4 mm x 0.5 mm; 15 mm x 4 mm x 1.0 mm and 15 mm x 4 mm x 1.5 mm).

To reduce reflection, the surfaces of the TP were painted matte black. After painting, the sizes of the defects in the TP were measured by an IP40 ORION micrometer (accuracy 0.001 mm): nominal w and measured width wm; nominal D and measured thickness Dm; and nominal d and measured defect depth dm. Tables 3 and 4 show the measured TP defect widths and depths [5].

Picture 4 shows both sides of TP #13: a) the side of the plate with simulated defects, and b) the side of the plate heated by a light flux, on which the surface temperature variation was monitored.

Picture 4. Aluminum test plate: a) side with simulated defects; and b) side heated by light flux, on which surface temperature variation was monitored [2, 5-12]

Picture 5 is a sketch of the TP. The center line along which temperatures were read out from the thermal image is the green line normal to the grooves.

Picture 5. Sketch of the aluminum test plate (thermal image temperatures were read out along the green center line, which is normal to the grooves) [5]

The main thermal-image acquisition parameters included: temperature, object emissivity and reflectivity, air temperature, relative air humidity, and the distance of the object from the thermal camera. The experiment whose results are reported in this paper was conducted at small distances, such that this last parameter had no significant effect. The thermal camera estimated the object emissivity and temperature.

Table 3. Nominal and measured defect dimensions [5]
TP | w [mm] | wm [mm] | Dm [mm]
1 | 4 | 3.962 | 1.953
2 | 3 | 2.939 | 1.952
3 | 2.5 | 2.471 | 1.949
4 | 2 | 1.952 | 1.949
5 | 1.5 | 1.446 | 1.957
6 | 1 | 0.978 | 1.967

Machining of the grooves with a rectangular bottom resulted in a slight deviation of the actual from the nominal depth. The varying defect depths were a potential source of noise in the thermal images.

Table 4. Nominal and measured depths of three defects whose widths differed
TP | d [mm] | dm [mm]
13 | 0.5 | 0.443
7 | 1.0 | 1.12
1 | 1.5 | 1.445

For comparison purposes, Table 4 shows both the nominal and the measured depths of the individual defects whose nominal defect width was w = 4 mm. Given that the
measured depths were not the real depths, it was


necessary to average out the measured depths of all the
defects, dm (for all six in Table 3).

5. RESULTS AND DISCUSSION
When a thermal camera is used in PT, it is extremely important to set up the experiment and interpret the captured thermal images properly. The frame-by-frame thermal images of a recorded sequence were monitored on a laptop, and it was of key importance to know the spatial and temperature resolution (sensitivity) of the thermal camera in order to correctly select the measurement geometry and interpret the results.

Picture 6 is the thermal image of the 56th frame, of a total of 181 in the sequence SEQ_0192, showing the post-heating periodic defects of TP #13. The AR01 rectangle marks the pixel area from which data were collected for statistical analysis, to arrive at a temperature profile along the purple line normal to the direction of the defects. The purple line on the thermal image was used to determine the exact curves of the temperature dependence of the contrast, i.e. the pixel temperature difference between the middle of the defective area and the area between the defects (in Picture 5 the line was green).

Picture 6. Thermal image of the 56th frame of the recorded sequence SEQ_0192 of test plate #13 [2, 12]

Area AR01 on the thermal image lies within the following coordinates: from pixel Y1=204 to pixel Y2=244, including the coordinate boundaries, along the vertical; and from pixel X1=291 to pixel X2=434, including the coordinate boundaries, along the horizontal. The average temperature of area AR01 was 19.3°C. The temperature scale is shown to the right of the thermal image. The temperature range was 4.2°C; the highest temperature was 19.9°C and the lowest 18.7°C.

Picture 7 shows a signal-to-noise temperature profile of four defects along the center line 33.33 ms after the TP surface was heated with a photographic flash, in the 56th frame; the corresponding temperature contrast read from the profile is ΔT = 0.9°C.

Picture 7. Temperature profile of a defect along the center line in the 56th frame of sequence SEQ_0192

Picture 8 is a thermal image of TP #13 at ambient temperature (i.e. of cold TP #13). The temperature scale is the same as in Picture 7 [2, 12], and the 55th frame of sequence SEQ_0192 is shown.

Picture 8. Thermal image of a cold test plate immediately prior to heating

The thermal image revealed a difference of 0.3°C between the average temperature of area AR01 (17.4°C) and the highest temperature along LI01 of 17.7°C, which was a result of thermal image noise and external parasitic reflection of warm objects in the environment (the noise is more apparent in the bottom right corner because of the flash effect).

Picture 9a shows the thermal image of the 588th frame of sequence SEQRec-018.ptw, of a total of 3047, which was the 5th after heating of the same TP #13, captured by the thermal camera with a cooled detector.

Picture 9a. Thermal image 12.5 ms after heating (588th frame of sequence SEQRec-018.ptw); positions of the markers of Line 1 on the surface of the test plate: X1=84, X2=240, and Y1=Y2=119

Picture 9b shows the 583rd frame of the same sequence, a thermal image of the cold plate immediately prior to heating. Box 1 marks the pixel area from which data were acquired for statistical analysis, to determine the temperature profile along the line normal to the direction of the defects. The markers of Box 1 were within the following coordinates: from pixel Y1=93 to pixel Y2=145 (including the coordinate boundaries) along the vertical, and from pixel X1=84 to pixel X2=240 (including the coordinate boundaries) along the horizontal. The average temperature of this area was 25.5°C and the temperature range 9.3°C.

The current average TP temperature of the pixels on the marker of Line 1 of the above frame was 25.3°C. The average temperature above the defect was 25.9°C and that of the area between the defects 24.8°C. The average surface temperatures of Box 1 and Line 1 of cold TP #13 were the same, 24.6°C.

Picture 9b shows the thermal image of the cold TP. The temperature in the bottom right corner of Box 1 was higher, 25.0°C, because of parasitic reflection.
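A sketch of this kind of line-profile contrast estimate is given below; it assumes the exported frame is available as a 2-D NumPy array of temperatures (placeholder data are generated here), with the Box 1 pixel coordinates taken from the text:

import numpy as np

# Illustrative line-profile contrast estimate; `frame` stands in for an exported 2-D array of
# temperatures in deg C, and the pixel coordinates mirror the Box 1 markers quoted in the text.
frame = np.random.normal(25.0, 0.1, size=(512, 640))  # placeholder data only
y1, y2, x1, x2 = 93, 145, 84, 240                      # Box 1 coordinates (inclusive)
profile = frame[y1:y2 + 1, x1:x2 + 1].mean(axis=0)     # average along the vertical of Box 1
contrast = profile.max() - profile.min()               # temperature contrast along the line
print(f"profile length: {profile.size} px, contrast = {contrast:.2f} degC")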

Picture 9b. Thermal image of a cold test plate immediately prior to heating by a light flux (583rd frame of sequence SEQRec-018.ptw). Temperature scale 23.8°C to 33.1°C

The paper presents only some of the results, obtained at a frame rate of 400 Hz, a single-frame generation time of 2.5 ms and an integration time of 1.665 ms. The temperature profile of a marked area actually represents the mean value of that area (along the vertical), generated directly by the ALTAIR software. The 583rd frame was the thermal image of the cold plate, the 584th frame showed that the detector was saturated and the temperature higher than 50°C, and the next, 585th, frame indicated that the pixels were slowly becoming unsaturated (the contours of the defects were not discernible). The defects were visible in the 586th frame, 7.5 ms after triggering of the flash. The 587th frame was the first in which the contours of the defects were clearly identifiable.

Picture 10 shows the temperature profile of four defects (w = 4 mm wide) in TP #13, based on the thermal images captured by the FLIR X6540sc thermal camera with a cooled detector; the corresponding temperature contrast is ΔT = 0.6°C.

Picture 10. Temperature profile along the line of defects from the 588th frame of sequence SEQRec-018.ptw

The exact temperature contrast was difficult to determine from the pixel temperatures in the middle of the defective and defect-free areas and from raw, noisy images. It was better to take a certain number of pixels from Box 1 and find the mean value along the vertical of Box 1.

Picture 11 shows the temperature profile of the same four defects as in Picture 10, but after temperature averaging along the vertical of Box 1; the resulting contrast is ΔT = 0.58°C. This was supported by FLIR ResearchIR Self Viewing File Version 4.00.0.39b, as the thermal image was converted from .ptw to the format of that software to simplify processing and analysis.

Picture 11. Temperature profile along the line of defects from the 588th frame of sequence SEQRec-018.ptw

6. CONCLUSION
The results of the experiment showed that pulsed flash thermography can be applied to detect defects and non-homogeneities in aluminum samples. To improve the efficiency of the method, the surface of the material should be painted matte black. The test plates were evenly heated by energy fluxes from the light source. The recording sequences of the two different thermal cameras always began at the same time, which was essential for the subsequent frame-by-frame analysis.

The thermal image of the 588th frame was also shown, along with the temperature profile of the center line and the averaged profile of the marked area. The defects were also visible in the 610th frame, after 42.5 ms. Noise due to external radiation was more pronounced in the 3-5 micrometer range, but this could easily be filtered out by subtracting the thermal image of the cold test plate from that of the heated test plate (frame by frame); in this way it was possible to distinguish defects in the test plates which had not been visible until then. In the reported research, the efficiency of defect detection in 1.949 mm to 1.967 mm thick aluminum test plates was the highest in the case of the shallowest defects (1.952 mm wide).

Today, the extensive application of thermal cameras in many areas of research leads to ongoing optimization of the various methods, including pulsed thermography. Although thermal cameras with cooled detectors are costlier than those with uncooled detectors, they offer numerous advantages in pulsed thermography. A thermal camera with a cooled detector can register temperature differences over a longer time interval, the quality of the thermal images is better, and these cameras can be used to record rapid dynamic processes, such as the appearance and disappearance of post-heating defect contours in the detection of defects in materials. The commercial thermal camera FLIR SC620 without a cooled detector did not support external triggering and prevented synchronization of the start frame with the triggering of the light source. This creates problems when such a camera is used in pulsed thermography for nondestructive testing of material. Another major advantage of thermal cameras with cooled detectors is easy spectral filtering, which facilitates the detection of details in the images. They also facilitate the visualization of thermal phenomena in both parts of the electromagnetic spectrum and support external triggering and synchronization of recording with the pulsed radiation source. Cameras with cooled detectors usually have a superior magnification capability because the lenses with multiple optical elements do not interfere with the signal-to-noise ratio; they respond to shorter infrared wavelengths and feature higher sensitivity, of the order of 20 mK (as opposed to 40 mK for thermal cameras with uncooled detectors). However, the main objective of the present research was to answer the question: when is a much cheaper thermal camera with an uncooled detector preferable in pulsed thermography? It is possible to determine the temperature contrast from thermal images when the quality of the images is high and the number of generated thermal images is large, because the monitored processes are rapid.
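The cold-frame subtraction mentioned above can be sketched as follows; the recorded sequence is assumed to be available as a NumPy array of frames after export from the acquisition software, so the array sizes and indices below are illustrative placeholders only:

import numpy as np

# Illustrative cold-frame subtraction: subtract the last pre-heating (cold) frame from every
# post-heating frame to suppress static parasitic reflections before reading off the contrast.
seq = np.random.normal(25.0, 0.1, size=(20, 512, 640))  # placeholder sequence (frames, H, W)
cold_idx, first_hot_idx = 4, 6                           # stand-ins for frames 583 and 588
diff = seq[first_hot_idx:] - seq[cold_idx]               # frame-by-frame difference images
peak_contrast = diff.reshape(diff.shape[0], -1).max(axis=1)
print(f"largest contrast in post-heating frame index {first_hot_idx + int(peak_contrast.argmax())}")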

ACKNOWLEDGEMENT
The results presented in this paper were obtained within project No 47029, financed by the Ministry of Science and Technological Development of the Republic of Serbia.

REFERENCES
[1] Maldague, X.P.V.: Theory and Practice of Infrared Technology for Nondestructive Testing, John Wiley & Sons, New York, USA, 2001.
[2] Tomi, Lj., Livada, B., Senani, M.: Subsurface defects detection in aluminum using pulse radiometry, Proceedings of 45th ETRAN Conference, Bukovika Banja, Serbia, pp. 262-265, 4-7 June, 2001.
[3] Tomi, D.Lj.: Nondestructive evaluation of the thermophysical properties of materials by IR thermography, Ph.D. dissertation, School of Electrical Engineering, University of Belgrade, Belgrade, Serbia, 2012.
[4] Tomi, D.Lj., Elazar, M.J.: Pulse thermography experimental data processing by numerically simulating thermal processes in a sample with periodical structure of defects, NDT&E Int., 60 (2013) 132-135.
[5] Tomi, Lj., Elazar, J., Milanovi, B.: Temperature field numerical simulation in pulse radiometric defectoscopy, Proceedings of 54th ETRAN Conference, Donji Milanovac, Serbia, pp. MO1.5-1-4, 7-10 June, 2010.
[6] Tomi, Lj., Elazar, J., Damnjanovi, V., Milanovi, B., Kovaevi, A.: Quantity testing of the defects in aluminum plates using infrared thermography, Proceedings of 57th ETRAN Conference, Zlatibor, Serbia, pp. MO1.5.1-5, 3-6 June, 2013.
[7] Tomi, Lj., Damnjanovi, V., Diki, G., Milanovi, B., Bonduli, B.: Quantitative testing of the defects in aluminum plates using pulsed thermography, Proceedings of 6th International Scientific Conference OTEH 2014, Belgrade, Serbia, pp. 705-709, 9-10 October, 2014.
[8] Kosti, I., Tomi, Lj., Kovaevi, A., Nikoli, S.: Detecting defects in material by analysis of recorded thermograms, Proceedings of 55th ETRAN Conference, Banja Vruica, Serbia, pp. MO4.5-1-4, 11-14 June, 2012.
[9] Tomi, Lj., Elazar, J., Milanovi, B., Kovaevi, A.: Temperature contrast enhancement techniques in pulse video thermography, Proceedings of 5th International Scientific Conference OTEH 2012, Belgrade, Serbia, pp. 427-431, 18-19 September, 2012.
[10] Tomi, Lj., Elazar, J., Milanovi, B.: Numerical simulation of the temperature field in active infrared thermography, Proceedings of 55th ETRAN Conference, Banja Vruica, Republic of Srpska, pp. MO1.4-1-4, 6-9 June, 2011.
[11] Tomi, Lj., Elazar, J.: Optimization of depth and spatial thermogram resolution in pulsed video defectoscopy, Proceedings of 56th ETRAN Conference, Zlatibor, Serbia, pp. MO4.4-1-4, 11-14 June, 2012.
[12] Tomi, Lj., Elazar, J., Milanovi, B.: Characterization of subsurface defects by radiometric nondestructive defectoscopy, Proceedings of 4th International Scientific Conference OTEH 2011, Belgrade, Serbia, pp. 587-591, 6-7 October, 2011.
[13] Keki, G., Sirovatka, R.: Characteristic of FLIR SC 620 Infrared Camera, Proceedings of 4th International Scientific Conference OTEH 2011, Belgrade, Serbia, pp. 411-416, 6-7 October, 2011.

CHANNEL SELECTOR FOR OPTIMIZATION OF TEST AND CALIBRATION PROCEDURES OF ICTM PRESSURE SENSORS
PREDRAG POLJAK
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
predrag.poljak@nanosys.ihtm.bg.ac.rs
MILO VORKAPI
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
worcky@nanosys.ihtm.bg.ac.rs
DANIJELA RANDJELOVI
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
danijela@nanosys.ihtm.bg.ac.rs

Abstract: This paper presents a channel selector developed to improve the efficiency of the test and calibration procedure for ICTM pressure sensors. So far, this procedure allowed testing of only one sensor at a time. The channel selector enables simultaneous testing of up to five pressure sensors. This brings many benefits, such as: reduction of power consumption, reduction of the amount of gas used in the measurement system, and a significant saving of the time needed for testing a small series of pressure transmitters. The realization of the channel selector is based on the use of reed relays driven by specially designed electronics. There are two modes of operation of this device. The first one is manual channel selection, while the second one is based on PC control realized by using dedicated software and a microcontroller, which are incorporated in the device itself and connected via USB communication. During the test and calibration procedure the pressure is generated by a MENSOR APC-600 pressure calibrator, while the temperature is controlled using a HERAEUS VÖTSCH VMT 08/140 temperature test chamber.
Keywords: pressure sensor, calibration, channel selector, reed relay, USB communication.

1. INTRODUCTION
One of the main research and development areas at the Centre of Microelectronic Technologies, which belongs to the Institute of Chemistry, Technology and Metallurgy (ICTM), is the field of pressure sensors. ICTM pressure sensors [1] reached the commercial level more than 25 years ago. The latest generation of ICTM pressure sensors is the SP-12, which demonstrated exceptional sensor linearity up to the burst pressure [2]. These commercial devices require reliable and efficient test and calibration procedures. In recent years, much effort has been put into improving these procedures [3, 4]. This paper presents a new procedure based on a channel selector. Such a solution contributes significantly to the improvement of the efficiency of the test and calibration procedure for ICTM pressure sensors.

Currently, there are two types of commercial channel selectors available on the market: the first are developed by renowned companies like National Instruments, are very expensive and offer numerous options [5], while the second are produced by small companies, with questionable quality, reliability and warranty [6].

While the first type of selectors would fulfil our technical requirements, their main drawbacks are the price and the existence of too many unnecessary options.

The main drawbacks of the second type of commercial selectors are:
Limited number of channels,
Software limitations,
Lack of flexibility,
Questionable quality, reliability and warranty,
Problems with delivery.

Motivated by all these facts, we developed a flexible channel selector that suits our needs and can operate either fully automatically, controlled by a PC, or manually, controlled by an operator. In our case, the channel selector performs selection of the pressure sensors.

The first drawback to be eliminated was the limited number of channels. In the current measurement set-up the optimal number of channels is five. In addition, we needed simultaneous connection of four lines from each sensor: one pair connects the sensor's input with a multimeter, and the second pair connects the sensor's output with another multimeter. Further, we needed options for programmable or manual switching for sensor selection.

2. TECHNICAL REQUIREMENTS
The first stage in the development of the sensor selector was the definition of the technical requirements. The device should enable sequential measurements of several sensors under the same conditions. The same measurement conditions comprise placement of the sensors in the same test chamber and on the same pressure line. The test chamber should provide a controlled temperature, while the pressure line is connected to the Mensor APC 600 pressure calibrator, which provides the pre-defined pressure values at which the sensors are tested.

Two programmes are required for such functioning of the


device. The first one is written for the microcontroller and
it enables independence of the device and combination of
the channels overlapping as desired. The second
programme is the user interface installed on PC. This
programme can be installed as independent software
which enables overlapping according to the needs or it
can run as a subroutine within the more complex
programme which runs the system containing MPX-1.

These sequential measurements should have two options:


1) selector is contolled by an external PC or 2) selector is
controlled by an operator who can, independently of the
external software, choose the sensor to be tested. Another
very important aspect taken into account during the
device design was compatibility with the already existing
connectors in the lab. Measurement setup established
earlier in our lab for test and callibration procedure of
single sensor used certain types of connectors and
compatible solution is used when designing the selector.

KEITHLEY
224

Taking into account all previously mentioned


requirements we have designed and fabricated the device.
We have considered possibility of incorporating digital
switches consisting of electrical circuits or reed relays.
After thoroughly analysing both options, we decided to
use reed relays. Ten integrated circuits, each containing 2
reed relays, are needed for our configuration. Choice of
integrated circuit is performed by digital outputs of the
microcontroller connected via ULN2003 integrated circuit
consisting of transistors in Darlington pair configuration.

Picture 2. System for test and calibration of the sensors where the channel selector MPX-1 is implemented as a sensor selector: 1 - sensor (channel) selector MPX-1; 2 - temperature test chamber HERAEUS VTSCH VMT 08/140, in which the measurement temperature is set and maintained (the temperature value is obtained from the temperature sensor placed inside the chamber); 3 - pressure sensor; 4 - pneumatic manifold with 5 connectors, to each of which a sensor to be tested is connected; 5 - pressure calibrator Mensor APC 600; 6 - multimeter Agilent 34410A, which measures the voltage value Ud at the input of the sensor; 7 - multimeter Agilent 34461A, which measures the voltage value U at the output of the sensor; 8 - PC with the installed software which controls and runs the measurement system; 9 - gas container under pressure; 10 - current source Keithley 224

The block diagram of the channel selector MPX-1 is presented in Picture 1, where the main elements of the MPX-1 device are shown. The selector consists of two digital modules and an independent switch for manual selection.
Picture 1. Channel selector MPX-1 block diagram: power supply, manual selection switch, MCU module with digital outputs and USB connection, and the selector unit with the reed relay module switching the sensor lines S1-S5 (in/out)

The system for test and calibration of the sensors based on the channel selector MPX-1 is presented in Picture 2. A brief description of each part of the system is also given.




The digital module incorporated in the microcontroller enables the PC connection, providing at the same time automation of the process of test and calibration of the sensors, as illustrated in Picture 2. The selector device can operate either in manual mode, where the operator chooses the sensor to be tested, or in automatic mode, where the specialized software installed on the PC which controls the measurement system performs the task of choosing the sensor.

Picture 3. Front panel of the sensor selector MPX-1



Connectors are placed both on the front and rear panels of the MPX-1 selector. The connectors on the front panel enable connection of the selector, and therefore of the sensor, with the measurement instruments, as shown in Picture 3.

Table 1 gives an example of the measurement results obtained for one pressure sensor tested at 11 measurement points (11 pressure values) at a fixed temperature of 20,8 °C inside the test chamber.

The connectors placed on the front panel are labelled as follows:
- +IN and -IN serve for connection with the constant current source (2.5 mA),
- +D and -D for connection with the digital multimeter which reads the voltage value at the input of the sensor, Ud,
- +OUT and -OUT are connected with the digital multimeter which reads the voltage value at the output of the sensor, U.

Table 1. Measurement results obtained for the pressure sensor SP-12 30m during one measurement sequence (Measurement No. 1; T = 20,8 °C; Ud = 6192 mV)

pa [bar]   U [mV]    U [mV]
1,1        41,38     41,385
2,1        70,14     70,14
3,1        98,84     98,84
4,1        127,47    127,475
5,1        156,02    156,025
6,1        184,49    184,49
7,1        212,86    212,86
8,1        241,13    241,13
9,1        269,28    269,28
10,1       297,31    297,31
11,1       325,21    325,21
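The measurement sequence behind Table 1 (select a sensor, step the calibrator through the pressure points, read Ud and U) lends itself to simple scripting. The following Python sketch with the PyVISA library is only an illustration under assumptions: the VISA resource names are placeholders, and the pressure-setting command of the Mensor APC 600 is hidden behind a hypothetical set_pressure() helper because its command set is not quoted here; 'READ?' is the standard SCPI trigger-and-read query of the Agilent multimeters.

import time
import pyvisa

PRESSURES_BAR = [1.1 + i for i in range(11)]   # the 11 pressure points of Table 1

def set_pressure(calibrator, bar):
    # Hypothetical helper: command the calibrator to a setpoint and let it settle.
    # calibrator.write(...)  # actual syntax depends on the Mensor APC 600 command set
    time.sleep(5)

rm = pyvisa.ResourceManager()
dmm_in = rm.open_resource("USB0::0x0957::0x0607::MY000001::INSTR")   # placeholder: multimeter reading Ud
dmm_out = rm.open_resource("USB0::0x0957::0x1B07::MY000002::INSTR")  # placeholder: multimeter reading U
calibrator = rm.open_resource("ASRL1::INSTR")                        # placeholder: pressure calibrator

for p in PRESSURES_BAR:
    set_pressure(calibrator, p)
    ud_mv = 1000 * float(dmm_in.query("READ?"))    # sensor input voltage, mV
    u_mv = 1000 * float(dmm_out.query("READ?"))    # sensor output voltage, mV
    print(f"{p:.1f} bar  Ud = {ud_mv:.0f} mV  U = {u_mv:.3f} mV")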

On the rear panel, connectors of Binder 680 type enable


connection of the device with the tested sensors as shown
in Picture 4. Besides that, an additional Binder 680
connector is placed on the rear panel, which serves for
connection with the acquisition device IHTM SMART.


On the rear panel there is also a connector for an additional 5 V supply, which is needed when an operator controls the measurements. This additional voltage supply is not needed if the device is run by the PC; in that case the voltage is provided by the USB port.


4. CONCLUSION
The channel selector MPX-1 presented in this paper was developed in order to overcome numerous drawbacks of the earlier procedure for test and calibration of ICTM pressure sensors. The previously used procedure allowed measurement of only one sensor at a time, requiring repeated connection and disconnection of the electrical wire connectors during the exchange of the tested sensors. This results in damaging the connectors and possible breakage of the wires.
Picture 4. Rear panel of the sensor selector MPX-1.

The channel selector MPX-1 enables simultaneous testing of up to five pressure sensors, and therefore acts as a sensor selector. The MPX-1 connects each tested sensor individually with the instruments which serve for voltage supply and for the measurement of the sensor's output voltage. Implementation of the sensor selector MPX-1 eliminates the problems described above, because the connection is realized through microswitching components instead of attaching and detaching the external connector elements.

Picture 5 shows a photograph of the measurement configuration with five pressure sensors. The sensors are placed inside the test chamber at a fixed temperature and all of them are connected to the same pressure line.

Implementation of the sensor selector brings many benefits, such as reduction of power consumption, reduction of the amount of gas used in the measurement system and significant saving of the time needed for testing a small series of pressure transmitters.

ACKNOWLEDGMENT
This work has been partially supported by the Serbian
Ministry of Education, Science and Technological
Development within the framework of the Projects
TR32008 and TR32019.

Picture 5. Photograph of five pressure sensors placed


inside the test chamber during the measurements.

References
[1] Djuri,Z., Matovi,J., Mati,M., Miovi,S.N., Petrovi,R., Smiljani,M., Lazi,.: Pressure Sensor with Silicon Diaphragm, Proc. XIV Yugoslav Conference on Microelectronics MIEL, Belgrade (1986) 88-100.
[2] Mati,M., Lazi,., Radulovi,K., Smiljani,M.M., Ralji,M.: Eksperimentalno odreivanje optimalne linearnosti senzora pritiska, Proc. 57th Conference for Electronics, Telecommunications, Computers, Automation and Nuclear Engineering ETRAN, Zlatibor, vol. MO3.1 (2013) (in Serbian)
[3] Vorkapi,M., Starevi,M., Popovi,B., Frantlovi,M., Poljak,P., Mini,S.: Model za unapredjenje ispitivanja transdjusera u maloserijskoj proizvodnji, Tehnika - Kvalitet, standardizacija i metrologija, 11(6) (2011) 1043-1047. (in Serbian)
[4] Poljak,P., Vukeli,B., Vorkapi,M., Randjelovi,D.V.: Prototype of the Multichannel Acquisition System Developed for ICTM Pressure Transmitters, Electronics, 19(2) (2015) 66-69.
[5] PCI General Purpose Relay Cards, http://www.pickeringtest.com/en-rs/products/pci/switch-cards/general-purpose-relay
[6] JSB2434 USB DPST four (4) Reed Relays Module, http://www.j-works.com/jsb2434.php

SECURITY SYSTEM IN MILITARY BASES WITH MATLAB ALGORITHM
TAMARA GJONEDVA
Military Academy General Mihailo Apostolski, Skopje, tgjondeva@yahoo.com
SOFIJA VELINOVSKA
Military Academy General Mihailo Apostolski, Skopje, sofija.velinovska@gmail.com
JUGOSLAV ACHKOSKI
Military Academy General Mihailo Apostolski, Skopje, jugoslav.ackoski@ugd.edu.mk
BOBAN TEMELKOVSKI
Military Academy General Mihailo Apostolski, Skopje, temelkovskiboban@yahoo.com

Abstract: Achieving a high level of security and safety against the newest threats is difficult, especially where security access breaches are present. The purpose of this paper is to present a prototype of a system for controlling entry at a military base; achieving a fast and controlled access system in military bases is its main objective. The system consists of several subsystems, which refer to the input data, the algorithm for parallel data processing and the management section. The input data are taken using a Raspberry Pi and a camera, while the processing is based on a MATLAB algorithm for face detection. The information system also provides live streaming of the base entry, i.e. it shows every person in the area in real time. The administrator of the management section sets the permissions for gaining access to the military base. The advantage of the system is that it can handle the entry of several persons at the same time without delay, and it immediately raises an alarm if a person has no previous permission.
Keywords: security, control, entry, camera and monitoring.

1. INTRODUCTION
Security access breaches are very common these days. To ensure the safety of the base and of the people working inside it, we created this security system [1], which helps us achieve a higher security level. The entry into a base is the control and check point which all visitors must pass in order to get inside, so monitoring the base entries plays a crucial part in the security of the base. Controlled and secured access is also essential for higher security and, with it, higher safety. One of the main problems is providing such fast and secured access control. The main aim of this paper is therefore to create a security system which increases the security of the base by shortening the time needed to gain access, raises an alarm if a person has no permission, and allows supervision of the base entry.

The information system consists of the following modules:
- Module for collecting data (results)
- Module for transferring data
- Module for analyzing
- Module for monitoring
- Module for live streaming
- Management module

The whole system is run by the administrator of the main server, which is set up in the General Base. The main server is connected, through a secured network, with the control units and the cameras at the entries into the bases.
The next table shows where the modules are located.

Table 1: Module location
Module             Location
Collecting data    Cameras located at the entry into the base
Transferring data  Cameras - Wi-Fi - control unit - secured network - main server
Analyzing          Main server
Monitoring         Main server
Live streaming     Control room and all allowed users
Management         Main server and control room

Another part of this security system is the monitoring and live streaming of the base. The advantage of the monitoring is that the entry into the base can be analyzed whenever that is needed. Live streaming of the base entry allows the entry to be followed at any time from the control room and from any higher command level that has been granted access by the administrator of the system.
All these elements are merged into one complete information system for access control at the entry into military bases.

The system for access control at the entry of military bases (SACEMB) [2] operates on the following principles. The cameras are positioned at the entry of the base, where an approaching person is checked for allowed access. The person is checked by the control room, which has access to the database on the main server where all persons that are allowed or forbidden to have access are saved. If the person is allowed, he or she gains access; if the person is forbidden to gain access, the control room is alarmed and further measures are taken. If a person has never entered the base before, he or she is scanned by the cameras, the permission is decided in the control room, and that person is added to the database on the main server. A minimal sketch of this decision logic is given below.
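The following Python sketch only illustrates the access decision described above; the identifiers and return messages are hypothetical, since the implementation itself is not given here.

ALLOWED, FORBIDDEN, UNKNOWN = "allowed", "forbidden", "unknown"

def check_access(person_id, database):
    # Decide the action for a person recognized at the gate.
    status = database.get(person_id, UNKNOWN)
    if status == ALLOWED:
        return "open gate"
    if status == FORBIDDEN:
        return "raise alarm in control room"
    return "manual ID check in control room, then add the person to the database"

db = {"cadet-017": ALLOWED, "visitor-903": FORBIDDEN}
print(check_access("cadet-017", db))   # -> open gate
print(check_access("stranger-1", db))  # -> manual ID check ...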

2. MODULES FOR COLLECTING DATA AND ANALYZING DATA

The main module of this security system is the module for collecting data. Different types of cameras can be used for surveillance over the entry into the bases, from which the data are transferred to the control room and to the main server.

This real-time face security system was developed using MATLAB version R2012a. A graphical user interface (GUI) allows users to perform tasks interactively through controls such as switches and sliders. The GUI can be run in MATLAB or as a stand-alone application. The Viola-Jones algorithm is used for face detection in MATLAB.

Picture 1: System security map

Picture 2. Architecture of the system


Characteristics that make this a good detection algorithm are:
- Face detection only (it distinguishes faces from non-faces; although designed for face detection, it can be retrained to detect cards, hands, etc.),
- Real time (at least two frames per second must be processed for practical application),
- Robust (very high detection rate and very low false-positive rate),
- Scale and location invariant detector,
- Instead of scaling the image itself (e.g. an image pyramid), the features are scaled.

2.1. VIOLA-JONES ALGORITHM

Even though there are different types of algorithms used for face detection, in this security system we have used the Viola-Jones algorithm [3],[4] for face detection with the MATLAB [5] program. The algorithm works in the following steps (an equivalent sketch is given after the list):
1. Creates a detector object using the Viola-Jones algorithm
2. Takes an image from the video
3. Detects features
4. Annotates the detected features
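The system itself implements these steps in MATLAB; purely as a language-neutral illustration, the short Python sketch below performs the same four steps with OpenCV's Haar-cascade (Viola-Jones) face detector. The camera index and cascade file are the standard OpenCV defaults, not values taken from this system.

import cv2

# 1. Create a detector object using the Viola-Jones (Haar cascade) algorithm.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)                # 2. Take images from the video stream.
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)  # 3. Detect faces.
    for (x, y, w, h) in faces:           # 4. Annotate the detected features.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("entry camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()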


The algorithm has four stages:

1. Haar Feature Selection
2. Creating an Integral Image
3. AdaBoost Training
4. Cascading Classifiers

The features sought by the detection framework universally involve the sums of image pixels within rectangular areas. As such, they bear some resemblance to Haar basis functions, which have been used previously in the realm of image-based object detection. However, since the features used by Viola and Jones all rely on more than one rectangular area, they are generally more complex. The value of any given feature is the sum of the pixels within the clear rectangles subtracted from the sum of the pixels within the shaded rectangles. Rectangular features of this sort are primitive when compared to alternatives such as steerable filters: although they are sensitive to vertical and horizontal features, their feedback is considerably coarser. The integral image makes each such rectangular sum a constant-time lookup, as the sketch below illustrates.
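The following NumPy sketch (not taken from this system) shows the integral image and how a two-rectangle Haar-like feature value is obtained from four-corner lookups.

import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[0..y, 0..x] (cumulative sum over rows and columns).
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    # Sum of pixels in a rectangle via four lookups; indices outside the image count as zero.
    def at(y, x):
        return ii[y, x] if y >= 0 and x >= 0 else 0
    bottom, right = top + height - 1, left + width - 1
    return at(bottom, right) - at(top - 1, right) - at(bottom, left - 1) + at(top - 1, left - 1)

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
# Two-rectangle feature on a 4x4 window: sum of the right half minus sum of the left half.
feature = rect_sum(ii, 0, 2, 4, 2) - rect_sum(ii, 0, 0, 4, 2)
print(feature)  # 16 for this example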

3. MODULES FOR TRANSFERRING DATA

The module for transferring data is responsible for the coordination and transfer of the data between the cameras placed at the entries of the bases [2], the control unit stationed near the entrance, which is also responsible for the live streaming, and the main room stationed in the General Base, which is responsible for monitoring and managing the collected data as well as for analyzing it.

4. MODULES FOR MONITORING AND LIVE STREAMING

The module for monitoring is directly represented by the cameras at the entry of the base. These cameras film the whole area and the movement of the soldiers, the visitors and all personnel who have access to it. As the authors in [6] state: "The system employs real-time face recognition to initially detect the presence of an authorized individual and to grant the individual access to the computer system. In some embodiments, the system also employs real-time face recognition to continuously or periodically track the continued presence of the authorized individual. Access to the computer system is revoked when the individual's presence is no longer detected."

The cameras at the entries of the bases are specific, with major advantages over and differences from other types of cameras: they capture the face of the person and also register every movement while that person is trying to gain access to the base.

After the capture, the recorded image is transferred to the main server, where the administrator sets the permissions. This is directly connected with the module for live streaming. The video from the base entry is transferred, with special permissions, to higher commands, while the images processed by the module for analyzing are further sent to the main server, where the administrator can set parameters and control the already created database.

5. MANAGEMENT MODULE

The management module is also called the permission management module. It is important for the functioning of the whole system and is managed directly by a human. The data collected by the cameras at the base entry and by the control unit are placed on the main server. From there the administrator sets the permissions (who is and who is not allowed to enter the base), monitors, and alarms the higher command levels if needed. This module is also directly connected to the live streaming module, because the administrator directly decides which commands, people and authorities can watch the live stream.

6. TESTING STAGE
This security system was tested during the academic year at the Military Academy General Mihailo Apostolski in Skopje, Macedonia. Thirty cadets took part in the test stage. Their pictures were previously saved in the database of the main server. After setting up the cameras, the computers in the control room and the main server, the cadets tried to enter the military base. The ones whose pictures had previously been saved were allowed to enter the base, and this part took less than 30 seconds. The cadets that had not been previously added to the database raised the alarm in the control unit and had their IDs checked; after the confirmation that they were cadets, they gained access. This part took less than 5 minutes. In addition, the live streaming and the monitoring were also tested, each as a single system and as part of the whole system, and both proved to work without any delay or major problems.


7. CONCLUSION
The system for access control at the entry into military bases was created with the purpose of achieving a higher level of security and providing a more controlled access system in military bases. With this system the time for collecting and analyzing the data is reduced to a minimum. SACEMB consists of five main modules.
The module for collecting and analyzing data is one of the main modules. The cameras are located at the entry into the base; the types of cameras used may differ, but the analysis is done on the main server.


The module for transferring data is responsible for transferring any kind of data in the whole system, through Wi-Fi and the secured network.


The modules for monitoring and live streaming allow constant surveillance of the entry into the base.


The management module is controlled by the administrator, who sets the permissions for allowed access to the base. Part of the management module is the database with all of the data gathered by the cameras.
As previously mentioned in this paper, after being tested this security system proved that it can provide more secure access to the base while shortening the period needed to gain access. This is a huge advantage for armies around the world, because a security system like this one provides 24/7 monitoring and also creates a database in which all persons, military or non-military, who had access are stored. This database can later be used for analysis, both for providing more information and for upgrading the system to achieve a higher level of security inside military bases.

References
[1] Yoder, Joseph, and Jeffrey Barcalow: Architectural patterns for enabling application security, Urbana 51 (1998): 61801.
[2] Bartlett, Marian Stewart, Javier R. Movellan, and Terrence J. Sejnowski: Face recognition by independent component analysis, IEEE Transactions on Neural Networks 13.6 (2002): 1450-1464.
[3] Yi-Qing Wang: An Analysis of the Viola-Jones Face Detection Algorithm, Image Processing On Line, 4 (2014), pp. 128-148. http://dx.doi.org/10.5201/ipol.2014.104
[4] Krishna, Sreekar: OpenCV Viola-Jones face detection in MATLAB, Mathworks, May (2008).
[5] Jawad Nagi, Syed Khaleel Ahmed: A MATLAB based Face Recognition System using Image Processing and Neural Networks.
[6] Atick, Joseph J., Paul A. Griffin, and A. Norman Redlich: Continuous video monitoring using face recognition for access control, U.S. Patent No. 6,111,517, 29 Aug. 2000.
[7] Bartlett, M.S., Movellan, J.R., Sejnowski, T.J.: Face recognition by independent component analysis, California Univ., San Diego, La Jolla, CA, USA.
[8] Schreier, Fred, and Marina Caparini: Privatising security: Law, practice and governance of private military and security companies, Geneva: DCAF, 2005.

IMAGING DETECTOR TECHNOLOGY: A SHORT INSIGHT IN HISTORY AND FUTURE POSSIBILITIES
BRANKO LIVADA
VLATACOM INSTITUTE, Belgrade, Serbia, branko.livada@vlatacom.com
DRAGANA PERI
VLATACOM INSTITUTE, Belgrade, Serbia, dragana.peric@vlatacom.com

Abstract: Imaging detectors are, in general, used to detect the radiation spatial distribution in the object space and to form an image suitable for pattern recognition, visually or by other means. Different effects are used for image recording (photochemical, photo-thermal, photo-electrical, photo-luminescent, etc.), but the discovery of the photo-electric effect introduced a real revolution in imaging technology, providing the capability for electronic imaging development. Electronic imaging detector development began with thermal detectors (thermocouples and bolometers), but due to their insufficient sensitivity the research spread to photon detectors (photo-cathodes, semiconductor detectors). Semiconductor detectors have high sensitivity, but some of them require cooling for proper operation. Semiconductor detectors provide sensitivity in the ultraviolet, visible and infrared parts of the electromagnetic spectrum, with high capacity for application in scientific research and military systems. The historical aspects of the technological issues in imaging detector technology are discussed in this paper, with special attention to the results achieved in Serbia (Yugoslavia). The progress in the semiconductor industry provided increased capability for photo-detector signal read-out and processing, leading to focal plane array (FPA) development, which revolutionized electronic imaging and infrared imaging. FPA development, due to the high number of detectors in the detector matrix, renewed interest in un-cooled thermal detectors. The basic properties of the most important technologies are reviewed in this paper. Special attention is paid to the current development of cooled and un-cooled FPAs and to expectations for further development.
Keywords: imaging detectors, infrared technology, focal plane arrays.

1. INTRODUCTION
Imaging detectors are, in general, used to detect radiation
spatial distribution in the object space, and form images
suitable for pattern recognition visually or by other
means. Different effects are used for image recording
(photochemical, photo-thermal, photo-electrical, photoluminescent, etc.). To develop image detection and
recording technology people used inspiration gathered in
nature [1]. The discovery of the photo-electric effect [2]
made a revolution introducing new electronic imaging
technology [3] providing new means for image detection,
processing and recording that accelerates various new
applications.

There are few articles dedicated to imaging technology


status presentation and analysis [13-15], which indicates
the level of imaging technology development
understanding and application possibilities in Serbia
(Yugoslavia). In this article we are updating the current
technology status in Serbia.

History is the teacher of life: understanding history provides the proper tools to increase one's ability to predict the future and to set the direction for applying development results. Because of that, studies of the history of imaging technology development make an important contribution to defining new development areas.

This short insight into imaging detector technology development, followed by a prediction of the developmental path in the near future, leads to practical implications of the imaging detector technology status for current and new areas of application. The most important and most widespread imaging detectors are infrared detectors. In addition to the well-reviewed worldwide imaging detector technology history (especially infrared detector history), we add a first condensed attempt to review the history of imaging detector development and application in Serbia (Yugoslavia).

Infrared detector development history is extensively reviewed in several textbooks [4] and articles [5-10], due to its application-related importance. The analysis of the technology development status is another important source [11-12] of information for predicting and planning future development directions. Also, articles dedicated to technology status analysis show the level of technological maturity of the environment in which the author is working.

The applications of imaging technology are spread over the whole electromagnetic spectrum, but specific technological solutions for imaging sensors are applied depending on the part of the spectrum. Different detector materials and operating temperatures are applied to provide sufficient sensitivity. The diversity of imaging detector technology and its applications is shortly described to provide better insight into the importance of the related breakthroughs in their research and development history.



The short highlights of imaging detector development history (worldwide and in Serbia/Yugoslavia) are presented to support some conclusions about near-future application possibilities.

3. IMAGING DETECTOR TECHNOLOGY


AND APPLICATIONS

The current status of imaging detector technology and


some possible future development trends are discussed to
provide a basis for prediction of the new developments
viewable for emerging applications.

The real electronic image sensor application started using


photosensitive surfaces (photoconductive, photovoltaic
and photo emissive) widely used in broadcasting imaging
systems [16]. Although television TV, was
demonstrated about 1920, it was not in the homes before
end of WWII. Early TV camera sensors (vidicon, orthicon
tubes using photoconductive surface) are sensitive during
the day using day light to generate black and white
images but later advanced to color imaging.

2. ELECTROMAGNETIC SPECTRUM AND


IMAGING TECHNOLOGY
Up to nineteenth century, all efforts were connected with
increasing the visual capability of the human eye,
meaning, in the visible spectrum. The discoveries of
infrared - IR light (1800, W. Herschel), electromagnetic
theory (Maxwell 1860) and IR radiation Laws (1884 Boltzmann) boosted research interest to other parts of
electromagnetic spectrum. The research interest was
spread to various detector materials with proper
sensitivity. Some of them could operate at room
temperature, but some of them should be cooled to
cryogenic temperatures.

Picture 1. Electromagnetic spectrum, operating


temperature and selected detector materials
The electromagnetic spectrum (EM) is shown in Picture 1, which illustrates the most important detector materials applicable over the EM spectrum and their working temperatures. To provide the possibility of forming proper electronic images using a single detector, opto-mechanical scanning was applied. For detector cooling to the required operating temperatures, liquefied gas technology was used. All this made imaging technology complicated and expensive, so the main applications were military and some scientific research. Military users recognized the power of the IR, both for imaging and non-imaging applications, so there was a lot of investment in the related technology development. The developments in UV lithography raised interest in UV imaging. The better penetration of the longer wavelengths through the atmosphere and materials caused new interest in terahertz imaging. The wide and successful photosensitive-film-based application of the very short wavelengths (X-rays) in medical imaging raised interest in applying X-ray sensitive detectors to form electronic images. Gamma-ray imaging is an emerging technology.

Picture 2. Image intensifier tubes generations


The military needs forced application of the imaging
sensors during the night- low light conditions. For those
purposes image intensifier technology, using photoemissive surfaces electron multiplication techniquesand
electro- luminescent screens, was developed as suitable
solution.
Image intensifier technology was ready for application
during WWII starting with image converters - GEN 0
tubes and active near IR source. They use direct viewing
of the intensified image on the tube screen.Over decades
of application continuous advancement of technology

introduces new improvements resulting in different image


intensifier tube generations as illustrated in Picture 2.

Some typical detector designs using different cooling


techniques are illustrated in Picture 4.

In the case that electronic images generated using image intensifier tubes need to be electronically transmitted or recorded, an additional visible image sensor needs to be coupled to the image intensifier tube using different approaches, as illustrated in Picture 3.

The advance in the development of semiconductor IR


detector was followed by development of the new optical
materials and lens design. Limited number of detectors in
arrays required the development ofopto-mechanical
scanning techniques to provide images having sufficient
resolution (spatial and temporal).
On the other hand, advances in silicon based
microelectronic devices gained new interests because of
possibility to join newly developed processing power with
increased number of IR detectors in matrix, so focal plane
detector arrays appears as irreplaceable solution. Two
solutions appears: (a) hybrid connection of the detector
matrix with Si based readout and processing chip, (b)
monolithic solution with read out and pre-processing
electronics built in the IR material.
New applications as security and machine vision together
with achievements in silicon microelectronics gained
interest in further image processing and new video
standards developments, supporting easier video data
transmission, exchange, and fusion.
Focal plane array technology requires detector mass
production that opens the door for wider FPA application
in other non-military application.

Picture 3. Image intensified camera design options:


Using Fiber optic tapper, (B) Using relay lens
Infrared imaging, promising better penetration through the
atmosphere and completely passive operation, took a lot
attention and started new efforts in the related technology
development. It was not only a matter of photosensitive
material development but also development of other means
needed to provide required cooling and detector operation.
Lot of development was done in vacuum technology,
followed with new technological solutions in cooling
devices, providing suitable application environment in the
systems. Vacuum technology development uses results of
other applications in science and industry [17, 18].

Picture 4. Cooled detector design options:


(A) Integral Stirling Cooler, (B) Joule-Thompson cooler
(C) Liquid nitrogen

At the beginning detector cooling needs were provided


using liquid gases and detector Dewar type housing
having a transmissive window for optical coupling with
imaging lens. The next step was application of the JouleThompson mini-coolers [19], the device that produce
liquefied gas under detector created from pressurized gas.
This application was connected with further development
of the gas purifying technology and high pressure gas
containers.

4. IMAGING TECHNOLOGY HISTORICAL


HIGHLIGHTS
In the beginning, development was connected with
thermal detectors (Thermoelectric effect - 1821 Seebeck,
thermocouples - 1829 G. Nobili, thermopile - 1833 M.
Melloni which are still used today, and which are
generally sensitive to all infrared wavelengths and operate
at room temperature. Their sensitivity was not good
enough for imaging. In parallel, lots of efforts were done
in photographic film and related technology development,
but it is not applicable for electronic imaging.

The development and application of the cooling engine


based on Stirling Cycle [20,21] provides solution that
enables long term autonomous cooling for detectors.
Developments were aimed to miniaturization and longer
MTBF that resulted to compact Stirling coolers with
several tens of thousands MTBF widely available today
for integration with detectors.

The discovery of the photo-electric effect (1905 A.


Einstein) made a first great breakthrough towards real
electronic imaging.The discovery of the photosensitivity
in natural galena - PbS (1933 Kutzcher) turned research

direction towards semiconducting materials. At the end of


WWII, the first detectors produced using artificially made
PbS, PbTe, or PbSe materials were released leading to
mass production in 1955 used for missile guidance
systems [22]. Cooled single element InSb, MWIR
detector, (1955 - T.S. Moss), were seen as perspective
candidate for guided missile homing heads.



LWIR detectors were based on the two competing


materials HgCdTe (1960 - Lawson) and PbSnTe (1967).
Initial difficulties with HgCdTe material uniformity and
instability, kept PbSnTe in the race, but discovery of the
native oxide based surface protection (mid-seventies) and
HgCdTe epitaxial layer technology (mid-eighties) make
that HgCdTe remain as the material of choice [23,24].

A similar technology development timeline related to


Serbia (Yugoslavia) is presented in Picture 6, showing
that there was some timely conducted research.
The imaging technology development in Serbia in the
beginning followed world trends up to end of eighties but
later it was damped down due to lack of investment.

Picture 5. Imaging Technology Development Time Line


Picture 6. Imaging technology time line in Serbia
(Yugoslavia)

The second great breakthrough was the discovery of the


silicon based Charge Coupled Device - CCD (1970 Smith & Boyle) [25]. After that all development efforts
were concentrated to Focal Plane Array - FPA
technology. One of the most important applications was
the development of CMOS silicon FPA [26] using
multiplexing for pixel detector signal read-out. The
development of the silicon based microelectronic devices
and application of the sub-micron lithography introduced
new image processing power and boost further
developments in the imaging technology.

However, there are several achievements we can be proud


of. The initial efforts in the PbSnTe material study [30],
production and application of the PbS detectors, cooled
InSb detector technology development [31], including
material, detector Dewar housing and Joule-Thomson
mini cooler. The partial success in the HgCdTe
technology deserves to be mentioned, especially in the
area of the un-cooled detectors [32, 33].
Image intensifier tube production started in the eighties is
still active introducing new innovations including GaAs
photocathode, micro-channel plate - MCP, and gated
power supply production in country.

The 21st century introduced room-temperature FPAs based on micro-bolometer MEMS technology (amorphous silicon, vanadium oxide) [27] and pyro-electrical materials [28]. The development of Molecular Beam Epitaxy (MBE) technology provides new opportunities in semiconductor material creation and in the development of Quantum Well Infrared Photodetectors (QWIP technology) [29]. An imaging technology timeline highlighting the most important achievements is illustrated in Picture 5, including both semiconductor and photo-emissive detectors.


Also, there are some successes in application of the


nanotechnology for IR detector manufacturing [34].

Some selected examples of the modern IR detectors are
shown in Picture 7.

5. IMAGING TECHNOLOGY CURRENT


STATUS AND FUTURE TRENDS
The maturity of the IR Focal Plane Array technology,
increase the availability of both cooled and un-cooled
detectors for other applications away of previously
dominant military applications. Also, Short-Wave
Infrared - SWIR spectral region shows a high capacity for
commercial application in remote sensing. This created
new demands for IR imaging systems in non-military
security applications, too [35]
Current trend in development of infrared detectors are
generally in miniaturization, resolution improvement,
system lifetime extension:

(A)

For mid-wave infrared (MWIR) cooled detectors:

(B)

Picture 7. Modern infrared detector examples:


(A) Cooled detector FPA with integral Stirling cooler (B)
un-cooled bolometer FPA with integrated image engine

- Miniaturization of the pixel pitch to less than 10 m,


which enables high definition resolution in the same FPA
form size [36].

6. CONCLUSION

- Mature development of new materials (named HOT


technology available for both InSb and HgCdTe) which
enables operation to be higher 150 K (instead of 77K).
This step in development enables usage of long life time
coolers, which now have to use much less power to cool
detectors to their operating temperature.

The short insight in history of the research and


developments of the imaging technologies shows a few
facts: (a) once when practical need is recognized and
enough efforts are done, one can get related results, but a
lot of work should be done in various areas and enough
time should pass until it is achieved; (b) in the science and
technological development everything is connected.

For long-wave infrared (LWIR) un-cooled detectors:


- Commercialization of infrared solutions has increased
due to un-cooled small detectors enabling IR channel to
be inserted into a Android device [37].

Nowadays, imaging technology is mature extending


development results in the new industries, which are now
diversified and specialized: detector industry, optical
industry, electronic processing - imaging engine industry
and finally systems integrator industry. This opens a
chance to other players out of exclusive detector
technology owners, leading to new and exciting
applications.

- Resolutions are also increased due to smaller pixel pitch


(currently 17 m).
Possibility of various applications boosted progress in the
development of un-cooled SWIR detectors:
- Increasing resolution of FPA in 15 m pixel pitch

References
[1] Wolken,J.: Light Detectors, Photoreceptors and
imaging Systems in Nature, Oxford University press,
New York, 1995
[2] Pranawa c. Deshmukh Shyamala Venkataraman,
100 Years of Einstein's Photoelectric Effect,
Bulletin of Indian physics Taechers Association,
September and October issues, 2006
[3] Rose,A.: Vision: Human and Electronic, Plenum
Press, New York, 1973
[4] Rogalski,A.: Infrared Detectors, CRC press,
Teylor&Hobson Group, Boca Raton, 2011
[5] Rogalski,A.: Infrared detectors: an overview,
Infrared Physics & Technology, vol.43, pp. 187-210,
2002
[6] Corsi,C.: TUTORIAL REVIEW: History highlights
and future trends of infrared sensors, Journal of
Modern Optics, Vol. 57, No. 18, 20 October 2010,
16631686
[7] Sizov,F.F.: Infrared detectors: outlook and means,

- Enabling TEC-less detector production with much lower


production costs.
Overall imaging device capabilities are going to increase
through:
- development and application of new read-out, image
pre-processing and enhancement techniques through
application of the more complex electronic boards and
design of proper video engine [38,39] in addition to basic
detector material developments
- Application of the un-cooled bolometric FPA in
emerging terahertz imaging[40].
On a system level there is a pipeline of sophisticated image processing which starts with video acquisition and calibration, after which image enhancement algorithms are applied. In a multi-sensor imaging system, video processing may include image fusion of multiple sensors to be shown on a single display. This concept can decrease the communication bandwidth necessary for video transmission, which is critical in mobile applications. Extensive research is being done in this area on combining multimodal sources and on optimizing the computational complexity and the time needed for algorithm runtime.
Semiconductor Physics, Quantum Electronics &


Optoelectronics, 2000, V.3, pp 52-58
[8] Nibir k. Dhar, Ravi Dat, and Ashok K. Sood,
Advances in Infrared Detector Array Technology,
InTech(open
source
article),
Ch
7.,
http:/dx.doi.org/105772/51565
[9] Rogalski,A.: History of infrared detectors,
Optoelectronics Review No 20(3), 2012, pp. 279
308
[10] Chandler Downs and Thomas Vandervelde,
Progress in Infrared Photodetectors Since 2000,
Sensors 2013, (open source article), vol.13, 50545098; doi:10.3390/s130405054
[11] Rogalski,A.: New trends in infrared detectors
technology, Optoelectronics Review No 3, 1993, pp.
95-106
[12] Rogalski,A.: Infrared Detectors: status and trends,
Progress in Quantum Electronics, Vol. 27 (2003)
pp.59-210
[13] Livada,B.: Vojne primene termovizijskih uredjaja,
KNTI 10, Vojnotehnicki Institut,, 2000 (Military
Applications of the Thermal Imaging Equipments,
Military Technical Institute Publications, series,
Cumulative Scientific Technical Information, No 10,
2000,)
[14] Livada,B.: Boundary values of parameters and
perspectives for the development of sensors for
thermal imaging equipments,- (edited translation of
"Granine vrednosti parametara i perspektiva
razvoja detektora za termovizijske ureaje,
VojnotehnikiGlasnik, vol 31. No 1, JanuaryFebruary 1983, pp 37-45), FTD-ID(RS)T-1793-83
report published as DTIC ADA137185, 1984,
[15] Livada,B., Babic,V.: Nocne optoelektronske sprave
sa pojacavacima slike (Electro-optical night vision
devices with image intensifiers), Vojnotehnicki
Glasnik, No1, Januar-Februar 1992, 18-34
[16] Wayant,R.W., Ediger M.N.: (Editors), Electro-Optics
Handbook, McGraw-Hill Companies, Inc., New
York, 2000
[17] Yoshimura,N.: Vacuum Technology- Practice for
Scientific Instruments, Springer-Verlag, Berlin
Heidelberg, 2008
[18] Hoffman,D., Singh,B., Thomas,J.III: (Editors),
Handbook of Vacuum Science and Technology,
Academic Press, New York, 1998
[19] Ben-Zion Maytal, Miniature Joule-Thompson
Cryocooling - Principles and practice, Springer
Science+Business media, New York, 2013
[20] Klaus,D. Timmerhaus, Richard P, Reed (Editors),
Cryogenic Engineering - Fifty Years of Progress,
Springer Science+Business media, New York, 2007
[21] Thomas M. Flyn, Cryogenic Engineering, Marcel
Dekker, New York, 2005
[22] Willardson,R.K.,
Beer,A.C.:
(editors),
Semiconductors and Semimetals, Vol.5 "Infrared
Detectors", Academic Press, New York, 1970
[23] Willardson,R.K.,
Beer,A.C.:
(editors),

OTEH2016

Semiconductors and Semimetals, Vol.11"Infrared


Detectors II", Academic Press, New York, 1977
[24] Capper,P., Elliott,C.T.: (Editors), Infrared Detectors
and Emitters: materials and Devices, Springer
Science+Business media, New York, 2001
[25] George E. Smith, "The Invention and Early History
of the CCD", Nobel Lecture, December 8, 2009
[26] Bigas,M., Cabuja,E., Forest,J., Salvi,J.: Review of
CMOS Image Sensors, Microelectronics Journal
vol.37 (2006) PP. 433-451
[27] Piotrowski,J.,
Rogalski,A.:
High
Operating
Temperature Infrared Photodetectors, SPIE Press,
Bellingham, USA , 2007
[28] Kruse,P.W., Skatrud,D.D.: (Editors), Uncooled
Infrared
Imaging
Arrays
and
Systems,
Semiconductors and Semimetals Vol. 47, Academic
Press, New York, 1997
[29] Schneider,H., Liu,H.C.: Quantum Well Infrared
Photodetectors, Springer-Verlag, Berlin Heidelberg,
2007
[30] Nikolic,P.: Some Optical Properties of Lead-TinChalcogenide
Alloys.
Publications
de
la
faculted'Electotechnique da l'Universite a Belgrade,
No354-356, 1971
[31] uri,Z.,
Livada,B.,[...],
Lazi,Z.:
Quantum
efficiency and responsivity of InSb photodiodes with
Moss-Burstein effect, Infrared Physics, Vol.29, No1,
(1989), pp. 1-7
[32] uri,Z., Jaki,Z., [...] Lazi,.: Some theoretical
and technological aspects of uncooled HgCdTe
detectors: a review, Microelectronics Journal vol.25
(1994) PP. 99-114
[33] Piotrowski,J.,
Rogalski,A.:
Uncooled
long
wavelength infrared photon detectors, Infrared
Physics & Technology, vol.46, pp. 115-131, 2004
[34] Jaksi,Z.:
Micro
and
Nanophotonics
for
semiconductor
Infrared
Detectors,
Springer
International publishing, Switzerland, 2014
[35] Corsi,C.: Infrared: A Key Technology for Security
Systems, Advances in Optical Technologies, Volume
2012, Article ID 838752
[36] Gershon,G., Albo,A., [...], Shkedy,L.: 3 MEGAPIXEL InSb INFRARED DETECTOR WITH 10 m,
Proc. SPIE 8704, Infrared Technology and
Applications XXXIX, 870438 (June 18, 2013);
doi:10.1117/12.2015583
[37] FLIR LEPTON Long Wave Infrared (LWIR)
Datasheet
[38] Spieler,H.: Imaging Detectors and Electronics-Aview
of the future, IWORID 2003, Riga, Latvia, Sept 711, 2003
[39] Kenneth,I.Schultz, Michael,W.Kelly,[..], James,R.
Wey: Digital-Pixel Focal Plane Array Technology",
Lincoln Laboratory Journal, Vol. 20, No2, 2014
[40] Rogalski,A., Sizov,F.: Terahertz detectors and focal
plane arrays", Opto-Electronics Review, vol.19, No
3. 2011, pp.346-406


IMAGE QUALITY PARAMETERS: A SHORT REVIEW AND APPLICABILITY ANALYSIS
JELENA KOCI
VLATACOM INSTITUTE, Belgrade, Serbia, jelena.kocic@vlatacom.com
ILIJA POPADI
VLATACOM INSTITUTE, Belgrade, Serbia, ilija.popadic@vlatacom.com
BRANKO LIVADA
VLATACOM INSTITUTE, Belgrade, Serbia, branko.livada@vlatacom.com

Abstract: The appearance and application of electronic imaging sensors during the second half of the 20th century, and the tremendous growth in the use of digital images in the 21st century, caused the development of new techniques for image acquisition, processing, transmission and reproduction. In addition, the huge amount of data contained in a digital image required the development of different compression techniques. All these processes introduce degradations in image quality. In most cases the images are intended to be used by a human observer, so human vision perceptual capabilities appear as the key criteria for image quality assessment. The human visual perception models are reviewed in the paper. Imaging applications in different areas, such as digital cinema, medical imaging and military target acquisition, require the development of different image quality criteria. The selected image and imaging system properties (size and field of view, image brightness, contrast ratio, resolution and MTF, noise) are reviewed. Image quality assessment can be done through objective laboratory measurements or using various computational techniques. The suitability of the different image quality assessment techniques for various applications is discussed.
Keywords: Image quality assessment, Perceptual image processing, Visual perception, Human visual system

1. INTRODUCTION

The development of the system of image quality parameters started with the appearance of electronic imaging
systems for image broadcasting (television) [1,2]. The
appearance of the digital images due to digitalization of
analog image signals and digital image sensors, e.g. focal
plane arrays, introduces a need for image compression
due to digital memory restrictions. Digital image
compression algorithms need to be designed to provide
minimal loss of information. Further, development of the
image processing and enhancement techniques leads to
development of the image quality assessment criteria.
Image quality assessment criteria are well reviewed in
books [3-6] and review articles [7,8]. One interesting
approach in defining electronic image information
capacity could be used as a base for generalized
comparison of various imaging systems [9].

Modern technology development provides different


means for electronic image formation, distribution and
processing leading to diversified applications of electronic
images. Quality of generated electronic image is very
important topic when it comes to optimization of image
applications.
Electronically generated images could be used for
observation by humans or for further processing and
extraction of information to be used for dedicated
purpose. In the case when the images are used by human
observers their quality relates to human perception and
judgment. It is necessary to develop adequate models of
observer reaction, in order to predict quantitatively or
qualitatively objective parameters of an image
usability,i.e. to determine related image quality
parameters. Since virtually all useful image processing
techniques produce objective changes, it is necessary to
provide an objective measure to provide information of
how these changes affects image information content. To
get a proper measure of processing effectiveness it is
useful to have defined set of image quality parameters. In
the case whenthe processed image is intended for
presentation and usage by human observers, a model of
the human visual information performance should be
involved. Otherwise, the criteria for application of the
image information could be used as image quality criteria.

Nowadays, digital imaging is incorporated in everyday


life (TV broadcasting technology, mobile technology,
cinema, medicine, machine vision, etc.) motivating any of
us to judge and have some opinion about image quality.
On the other hand, image recording, transmission,
reproduction require additional image processing that
need to keep undisturbed initial image content.
This article addresses the issues of image quality
description and quality assessment approaches in the
various fields of digital image applications. It is important
to identify common principles that connect parameters of
the image-forming systems, images perceived by human
observer and the electronically processed images for


defined purpose. In addition, correlation between image
quality parameters related to experimental evaluation and
computational evaluation are identified and compared.



This short critical review is done to support our further


research of the image quality assessment methods
applicable for the surveillance systems. As a result of an
initial study of image quality requirements for multisensor surveillance systems the key parameters of the
imaging systems and image quality based on human
visual system (HVS) performances are identified.They
will be studied in more details in further image quality
assessment procedures development. The HVS models
are shortly described as introduction to discussion about
image quality assessment (IQA) meaning and approaches.
The IQA related different applications are discussed.
Perceptual and computational IQA methodology is
reviewed, with attention to related measurement methods.
This paper is an introduction insystematic research that
needs to be done, pointing out issues and topics that will
to be studied in more details in the future work.


2. HUMAN VISUAL SYSTEM MODELS


Performance parameters of human vision are the key limiting factor for the perception and extraction of the information contained in an image. The visual information perceived by a human observer can be used directly for image quality assessment through psychophysical measurements. Psychophysical measures of image quality are, however, too costly and time consuming for evaluating the impact that each algorithm modification might have on image quality [10]. On the other hand, it is convenient to have an analytical model of the human visual system that can be incorporated into various algorithms for image compression or processing.

Modeling of human vision has a long development history based on the results of psychometric measurements and on the defined needs of the aimed application. The basic principles [11,12] rely on proper analytical modeling starting from known experimental results. One of the best known models [13] is based on modeling the dependence of the contrast sensitivity function on spatial frequency and the level of illumination (see Picture 1). Further development introduced models that involve HVS motion sensitivity (both eye motion and motion in the image), temporal sensitivity [14-16] and color sensitivity [17, 18]. A lot of work remains to involve attention, adaptation and image content in the related HVS models and to facilitate the development of new, more complete and generalized models. At the same time there is a need for new systematic psychometrical measurements tailored to support mathematical modeling [10]. This is a rapidly developing area requiring new breakthroughs to support new image processing needs.

Picture 1. Contrast sensitivity function dependence on spatial frequency and illumination level

The selected HVS properties describing its limiting possibilities are [19-21]:
- Contrast sensitivity, as illustrated in Picture 1,
- Resolution power (Nyquist limit) 56 cycles/degree,
- Visual acuity limit 1 arc-minute, minimum perceptible limit 0.3 arc-minute,
- Dynamic range 10^-6 - 10^6 nits,
- Critical Flicker Frequency (CFF) 60-72 Hz.

3. IMAGE QUALITY ASSESSMENT MEANING AND APPROACHES

The measurement of image quality is not easy to define, as it often depends on the context and on application specificities. Image quality as perceived by human observers is a measurable and consistent property, even when comparing images with different content and degradation types. Threshold values for selected perceptual parameters can be set in accordance with the image application and applied in a consistent and repeatable way. A change of application will require a corresponding change of the selected parameters.
The key factors affecting image quality are:
- Geometrical - including all aspects of the optical systems and sensors and the scene or image structure,
- Radiometrical - including grey level values within a certain dynamic range and colors (imager spectral sensitivity range and output image spectral range).
The most often used image quality metrics, developed for different image applications, are based on the following approaches:
Wave front error analysis (peak to valley, RMS),
related to image forming optics,
Fractional encircled energy analysis,
Resolution test charts application (starting from 1959
USAF three bar test chart),
Imagery interpretability rating,
Johnson's Criteria related procedures,
General image quality equation.


The way one uses these metrics for image quality assessment depends on the image application (the purpose of the image quality assessment).

Core properties of an imaging system are related to image quality. Although these properties are not sufficient and cannot be used as a measure of image quality, they can be used to analyse how the imaging chain contributes to image quality. The generalized structure of the imaging chain is shown in Picture 2.

Most imaging system applications involve image presentation to a human observer, and in that case IQA depends on the possibilities of human visual information perception. HVS involvement means that perceptual IQA is applied, either using human observer judgment or by applying judgment filters designed using an HVS model. IQA methods are always developed to cover the most critical parameters for the target application. The usage of related test chart patterns is a popular approach because of the simplicity of its application. The design of test charts is often intuitive, but it can be a very inventive process.

Image quality can be closely connected to the following imaging system parameters:
- MTF (Modulation Transfer Function),
- CTF (Contrast Threshold Function),
- Noise: Noise Equivalent Irradiation (NEI), Noise Equivalent Quanta (NEQ), Noise Equivalent Temperature Difference (NEDT),
- SNR (Signal to Noise Ratio),
- JND (Just Noticeable Difference).

5. PERCEPTUAL IMAGE QUALITY ASSESSMENT

Perceptual image quality is correlated with the observer's opinion, meaning that image quality is judged according to human perception properties. Perceptual IQA is always involved when human observer judgment takes part in experimental image examination, but also when HVS models are included in computational algorithms for analyzing the efficacy of image processing procedures. Also, HVS limitations can be used to define threshold values for the video signal propagating through the imaging chain. Only in systems that use the image as a source for pattern recognition and automatic reaction using predefined procedures can objective parameters alone be used for image evaluation.


6. COMPUTATIONAL DIGITAL IMAGE QUALITY ASSESSMENT

Picture 2. The generalized imaging chain


Image quality assessment is based on image visual
perception testing (perceptual image quality assessment)
related to human visual system perceptual properties or
image information content analysis digital image
computational assessment.

Image quality degradation is caused by the techniques applied for image acquisition, processing, transmission and reproduction. This is further emphasized by the use of various compression algorithms. For all systems that deal with image acquisition, management, communication and processing, the ability to control the whole image processing pipeline is of great importance [6].

4. IMAGE QUALITY ASSESSMENT IN DIFFERENT APPLICATIONS

The main goal is to obtain a quantitative value generated by an appropriate metric. In this section we describe three objective quality assessment methods:
FR-IQA (full reference image quality assessment),
RR-IQA (reduced reference image quality assessment),
NR-IQA (no reference image quality assessment).

Image quality assessment applies to electronic images, including digital images. The IQA approach and method for image evaluation depend on the structure of the image forming and image presentation systems [24]. IQA procedures can be applied either at the end of the imaging chain or at any breakpoint within it. Generally speaking, there are three groups of IQA procedures related to the type and functions of the imaging system:
Image forming systems,
Image compression, processing and transmission systems,
Image presentation systems - displays.

According to the area of image application, there are some IQA specificities in the following areas:
Cinematography and TV,
Still photography,
Machine vision,
Medical imaging and displays [22, 23],
Computer graphics,
Surveillance and targeting systems.

Full reference (FR) image quality metrics require full access to the reference image. As stated in [27], the simplest and most widely used full reference quality metric is the mean squared error (MSE), calculated as:

MSE = (1/N) * SUM over (i, j) of [ X(i, j) - Y(i, j) ]^2      (1)

where X and Y are the reference image and the image under test, respectively, and N is the number of pixels. An additional metric derived from the MSE is the peak signal to noise ratio, defined as:

PSNR = 10 log10 [ (2^B - 1)^2 / MSE ]      (2)


where B represents the number of bits per pixel of the image.
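As an illustration only (not taken from the cited works), the following is a minimal NumPy sketch of equations (1) and (2), assuming two equally sized greyscale images held in arrays:

import numpy as np

def mse(x: np.ndarray, y: np.ndarray) -> float:
    """Mean squared error between reference image x and test image y, eq. (1)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    return float(np.mean((x - y) ** 2))

def psnr(x: np.ndarray, y: np.ndarray, bits_per_pixel: int = 8) -> float:
    """Peak signal to noise ratio in dB, eq. (2); B is the number of bits per pixel."""
    peak = (2 ** bits_per_pixel - 1) ** 2
    e = mse(x, y)
    return float("inf") if e == 0 else 10.0 * np.log10(peak / e)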
Venkata et al. [28] proposed a distortion measure (DM) as well as a noise quality measure (NQM) to quantify the impact on the human visual system (HVS) of frequency distortion and noise injection in image restoration, where they derived a 2-D distortion transfer function for modeling the linear distortion effects present in restored images.

A new approach that utilizes the joint statistics of two types of commonly used local contrast features is presented in [36]. Initially, the gradient magnitude (GM) map is used and after that the Laplacian of Gaussian (LOG) response. An adaptive procedure is then applied to jointly normalize the GM and LOG features, and it is shown that the joint statistics of normalized GM and LOG features have desirable properties.

The structural similarity metric (SSIM) [29] is used to improve on the traditional MSE and PSNR algorithms. There is also an extension of this metric known as MS-SSIM, which Wang et al. [30] defined as:

MSSSIM = (1/N) * SUM for i = 1 to N of SSIM_i      (3)

This means that MS-SSIM represents the average of the SSIM values over all N windows that contribute to the final equation.
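For illustration, a simplified sketch of this windowed averaging is given below; it uses uniform, non-overlapping windows and the standard SSIM constants, and is not the Gaussian-weighted, multi-scale formulation of [29, 30]:

import numpy as np

def ssim_window(x, y, data_range=255.0):
    """SSIM index for one image window (uniform weighting, constants from [29])."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def mean_ssim(x, y, win=8):
    """Average SSIM over non-overlapping win x win windows, in the spirit of eq. (3)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    scores = []
    for i in range(0, x.shape[0] - win + 1, win):
        for j in range(0, x.shape[1] - win + 1, win):
            scores.append(ssim_window(x[i:i + win, j:j + win], y[i:i + win, j:j + win]))
    return float(np.mean(scores))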

7. IMAGE QUALITY PARAMETERS MEASUREMENT METHODS

The most common image parameters used in experimental image quality assessment are [37-41]:
OECF - Optoelectronic conversion function
Image size and resolution
Noise (Signal to noise ratio)
Image contrast (contrast ratio)
Image sharpness
Grey levels
Black level
Gamma calibration
White saturation / balance
Response time
Color properties
Viewing angle

Reduced reference (RR) image quality metrics relate to the situation when only parts of the image are available as reduced features extracted from the reference image. RR is often used in transmission of video data through complex communication networks for tracking the degree of visual degradation. In this case the receiver does not have access to the original video data [6]. Wu et al. [31] proposed an RR image quality assessment based on visual information fidelity. They compute separately the quantities of the primary visual information and the residual uncertainty of an image and then evaluate the quality assessment for these two types of information.


The most common experimental IQA methods are based


on the application of the specially designed test charts in
controlled laboratory conditions using trained observers.

Rehman and Wang [32] estimate SSIM, a widely used technique for FR image quality assessment. They found that the relationship between the FR SSIM measure and their RR estimate is linear when the image distortion type is fixed. In that paper, the novel idea of partially repairing an image using RR features is introduced. Soundararajan and Bovik [33] used image information changes and measured them between the reference and natural image approximations of the distorted image. As the result, they designed algorithms that measure the differences, as perceived by humans, between the entropies of wavelet coefficients of reference and distorted images.
No reference (NR) image quality metrics are the most demanding. This is, as stated in [6], the most difficult problem in the field of image analysis. Such metrics should reach conclusions based only on the captured (or otherwise given) image. This is a very difficult task for machines, but for human observers it is quite easy.
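The learned NR models of [34, 35] are beyond the scope of a short sketch; purely as an illustration of reference-free scoring, the following uses the variance of the Laplacian as a naive sharpness (blur) indicator, which is not the method of either reference:

import cv2
import numpy as np

def blur_score(gray: np.ndarray) -> float:
    """Variance of the Laplacian: a crude no-reference sharpness indicator.
    Low values suggest a blurred image; the decision threshold is application dependent."""
    return float(cv2.Laplacian(gray.astype(np.uint8), cv2.CV_64F).var())

# Hypothetical usage: img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE); print(blur_score(img))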

Picture 3. Radiological display test chart (DICOM)

In [34] Ye and Doermann propose a general-purpose NR


image quality assessment method. This approach is based
on visual codebooks that consist of Gabor-filter-based
local features which are extracted from local image
patches and used to capture complex statistics of a natural
image. In comparison with other methods, the proposed
one does not assume any specific types of distortions.
In [35] Mittal et al. propose an NR image quality assessment model that operates in the spatial domain. In order to quantify possible losses of "naturalness" in the image due to the presence of distortions, they use scene statistics of locally normalized luminance coefficients.

Picture 4. Projection systems test chart (SMPTE)

Picture 8. Color reproduction test chart (Macbeth)

Picture 9. Camera focus test chart

Using Johnson's criteria and adding additional features, new and improved models for targeting task performance modeling were developed [25, 26].

Picture 5. Broadcasting systems test chart (IEEE)

Test charts are designed for a particular purpose (see Pictures 3 to 6) and contain patches aimed at evaluation of the designated performance. Using HVS-based image information perception criteria developed for image intensifier based night vision (Johnson's criteria [25]), the 1951 USAF test chart was developed for resolution testing of imaging systems (see Picture 7).

The examples of the application of test charts in image quality evaluation show the effectiveness of the methods proposed in [37, 38]. There is a lot of effort to standardize imaging system image evaluation procedures [39-41], but this task is not finished yet. Some test charts developed for dedicated purposes, such as color reproduction testing (see Picture 8) and camera focus testing (see Picture 9), are widely adopted, together with plenty of custom designed test charts for special purposes.

8. CONCLUSION
Electronic imaging (particularly digital imaging) has introduced a revolution in image application, spreading it into all aspects of our lives and of modern technology development. Image quality is one of the key aspects of image application.
Image quality parameters are closely connected with imaging system performance, but cannot be replaced by it due to the complexity of human visual perception. The study of HVS performance is one of the key sources for defining image quality. Image quality is usually considered as a measure of visual impression, but human perception of visual information depends on many different factors: sharpness, contrast, colorfulness and personal preferences.

Picture 6. Imaging camera systems test chart (ISO)

In the case that image data are used for automatic pattern recognition (without human observer involvement), some image quality parameters can be derived directly from the imaging system.
Image quality assessment methods are still developing and they are closely connected to the purpose of image data evaluation, providing only partial answers such as: how good the image generation system is, how the image will be perceived by a human observer, or what disturbances are introduced into the image during image processing, compression or transmission. No universal system of image quality parameters is available at the moment.

Picture 7. Imaging systems test chart (USAF 1951)

References
[1] Albert Rose, Vision: Human and electronic, Plenum
Press, New York, January, 1971


[19] Jerry Whitaker, Blair Benson (editors), Standard


Handbook of Video and Television Engineering, The
McGraw-Hill Companies, 2003
[20] W .N .Charman, Optics of the Eye, Ch24, Volume
1, Handbook of Optics (Michael Bass , editor in
chief . 2nd ed .), 1995 by McGraw-Hill , New
York,
[21] David G. Curry, Gary Martinsen, and Darrel G.
Hopper, Capability of the human visual system,
Proceedings of SPIE. Vol. 5080 "Cockpit Displays
X, (Darrel Hopper, Editor), 2003
[22] American Association of Physicists in Medicine,
ASSESSMENT OF DISPLAY PERFORMANCE
FOR MEDICAL IMAGING SYSTEMS, AAPM
ON-LINE REPORT NO. 03, 2005
[23] John A. Carrino, Digital Image Quality: A Clinical
Perspective, Ch 3, .in Quality assurance, The
Society for Computer Applications in Radiology,
Great Falls, VA (2003): 29-37.
[24] John Johnson, Analysis of Image Forming
Systems, Image intensifier Symposium, Fort
Belviour, VA, USA
[25] Richard H. Vollmerhausen, Eddie Jacobs, The Targeting Task Performance (TTP) Metric - A New Model for Predicting Target Acquisition Performance, Technical Report AMSEL-NV-TR-230
[26] Richard H. Vollmerhausen, Representing the
observer in electro-optical target acquisition
models, OPTICS EXPRESS, September 2009, Vol.
17, No. 20
[27] Wang, Z., Bovik, A. C., Sheikh, H. R., Simoncelli, E. P., Image Quality Assessment: From Error Measurement to Structural Similarity, IEEE Transactions on Image Processing, Vol. 13, No. 1, January 2004.
[28] Venkata, N. D., Kite, T. D., Geisler, W. S., Evans, B. L., Bovik, A. C., Image Quality Assessment Based on a Degradation Model, IEEE Transactions on Image Processing, Vol. 9, No. 4, April 2000.
[29] Wang, Z., Bovik, A. C., Sheikh, H. R., Simoncelli, E. P., Image Quality Assessment: From Error Measurement to Structural Similarity, IEEE Transactions on Image Processing, Vol. 13, No. 1, January 2004.
[30] Wang, Z., Simoncelli, E. P., Bovik, A. C., Multiscale structural similarity for image quality assessment, Proceedings of the 37th IEEE Asilomar Conference on Signals, Systems and Computers, 2004.
[31] Wu, J., Lin, W., Shi, G., Liu, A., Reduced-Reference Image Quality Assessment with Visual Information Fidelity, IEEE Transactions on Multimedia, 2013.
[32] Rehman, A., Wang, Z., Reduced-Reference Image Quality Assessment by Structural Similarity Estimation, IEEE Transactions on Image Processing, 2012.

[2] William F. Schreiber, Fundamentals of Electronic


Imaging Systems. Some aspects of Image
processing, Springer-Verlag Berlin, Heidelberg,
1986
[3] Joyce E. Farrell, " Image quality Evaluation", CH15
in Colour Imaging: Vision and Technology (Editors:
L.W.MacDonald and M.R.Luo), John Wiley and
Sons Ltd, New York, 1999
[4] Brian W Keelan, Handbook of Image Quality
Characterization and Prediction, Marcel Dekker Inc.,
New York, 2002
[5] H.R.Wu K.R. Rao (Editors), Digital Video Image
Quality and Perceptual Coding, CRC Press, Boca
Raton, 2006
[6] Wang, Z., Bovik, Modern image quality assessment,
Morgan & Claypool publishers, 2006
[7] Shruti Sonawane, A. M. Deshpande, "Image Quality Assessment Techniques: An Overview", International Journal of Engineering Research & Technology (IJERT), Vol. 3, Issue 4, April 2014, pp. 2013-2017, ISSN: 2278-0181
[8] Peter D. Burns and Don Williams, Ten Tips for
Maintaining Digital Image Quality, Proc. IS&T
Archiving Conf., pg. 16-22, IS&T, 2007
[9] Igor I. Taubkin, Mikhail A. Trishenkov," Information
Capacity of electronic vision systems", Infrared
Physics and Technology, Vol.37, 1996, pp. 675-693
[10] Thom Carneyab, Stanley A. Kleinb, [..], Miguel P.
Ecksteini, " The development of an image/threshold
database for designing and testing human vision
models", Proc. SPIE 3644, Human Vision and
Electronic Imaging IV, 542 (May 19, 1999);
doi:10.1117/12.348473
[11] Mark S. Rea, |Toward a Model of Visual
Performance: Foundation and Data, Journal of the
Illumination Engineering, Summer 1986, pp. 41-57
[12] Mark S. Rea, Toward a Model of Visual
Performance: A Review of Methodologies, Journal
of the Illumination Engineering, Winter 1987, pp.
128-147
[13] Peter G. J. Barten, Contrast Sensitivity of the Human
Eye and Its Effects on Image Quality, SPIE Optical
Engineering Press, 1999.
[14] Andrew Watson, Ch6 " Temporal Sensitivity" in
Handbook of perception and human performance,
(Kenneth R Boff; Lloyd Kaufman; James P Thomas,
New York : Wiley, 1986.editors )
[15] Andrew B. Watson, Invited Paper: The Spatial
Standard Observer: A Human Vision Model for
Display Inspection, SID 06 DIGEST, SSN00060966X/06/3702-1312, pp.1312-1315
[16] Andrew Watson, Model of human visual motion
Sensing, Journal of Optical Society Of America A,
Vol.2 No.2, February 1985
[17] M. D. Fairchild, Color Appearance Models, Addison
Wesley, 1997.
[18] Phil Green, Lindsay MacDonald, Colour Engineering: Achieving Device Independent Colour, John Wiley & Sons, Ltd., New York, 2002

[37] J. Kocić, M. Radisavljević, S. Vujić, I. Popadić, Comparative Analysis of Image Quality for Low-Light Cameras in Low Light Conditions, Proc. of the 60th ETRAN Conference, 2016
[38] Kyung-Woo Ko, Kee-Hyon Park, and Yeong-Ho Ha, Evaluation of camera performance using ISO-based criteria, 16th Color and Imaging Conference Final Program and Proceedings, pp. 238-242, Society for Imaging Science and Technology, January 1, 2008
[39] Dietmar Wueller, Proposal for a Standard Procedure to Test Mobile Phone Cameras, Proc. of SPIE-IS&T Electronic Imaging, SPIE Vol. 6069, 2006
[40] UL 2802, Camera Performance Measurement, UL LLC, 2013
[41] ISO 12233, Photography - Electronic still-picture cameras - Resolution measurements, ISO, 2000

[33] Soundararajan, R., Bovik, A. C., RRED Indices: Reduced Reference Entropic Differencing for Image Quality Assessment, IEEE Transactions on Image Processing, 2012.
[34] Ye, P., Doermann, D., No-Reference Image Quality Assessment Using Visual Codebooks, IEEE Transactions on Image Processing, 2012.
[35] Mittal, A., Moorthy, A. K., Bovik, A. C., No-Reference Image Quality Assessment in the Spatial Domain, IEEE Transactions on Image Processing, 2012.
[36] Xue, W., Mou, X., Zhang, L., Bovik, A. C., Feng, X., Blind Image Quality Assessment Using Joint Statistics of Gradient Magnitude and Laplacian Features, IEEE Transactions on Image Processing, 2014.


MULTI-SENSOR SYSTEM OPERATORS CONSOLE: TOWARDS


STRUCTURAL AND FUNCTIONAL OPTIMIZATION
DRAGANA PERIĆ
VLATACOM INSTITUTE, Belgrade, Serbia, .peric@vlatacom.com
SAŠA VUJIĆ
VLATACOM INSTITUTE, Belgrade, Serbia, vujic@vlatacom.com
BRANKO LIVADA
VLATACOM INSTITUTE, Belgrade, Serbia, branko.livada@vlatacom.com

Abstract: Multisensor electro-optical systems are developing to improve the capability of searching, detecting, classifying,
and identifying objects for target reconnaissance, surveillance and tracking purposes, particularly at night and in poor
weather conditions. The human operator is usually included in the loop using operator's console. The selection of the
operator's console structure and definition of the console functionality is governed by multi-sensor structure and
capabilities, mission scenario requirements and human operator's properties. The operator's console generalized
structure is defined and key component parameters and functions are discussed. Also, multi-sensor system generalized
structure is defined. To ensure optimal performance multi-sensor system requires an appropriate interface and control
through operators console. The design of the operator's console must realize the interaction between system
technological capability, operator performance, and system mission requirements. The cross-reference analysis of the
operator's console structure dependence on multi-sensor system structure and mission requirements is performed. Some criteria for the definition of operator's console component requirements are discussed and derived.
Keywords: Multi-sensor system, operator's console, human computer interaction, display.

1. INTRODUCTION
The multi-sensor surveillance system is an adaptable modular system for managing mobile as well as stationary sensors mounted on a sensor head, using a human-operated command and control station. The main task of the control station - the operator's console - is to work as an ergonomic user interface and a data integration hub between multiple sensors and a super-ordinate control center. Because of that, the structure and functions incorporated in the operator's console can be complex and demanding. In addition, operators need to take command in the selection of functions, analyze images and other data, and make decisions.

Operator Machine Interaction studies [1-5], oriented more to operator's console functions than to design, are based on the general HMI (Human Machine Interaction) principles [6-12], which are more general than directly applicable to the definition of the operator's console design. In this article the same principles are applied, but with more specifics connected with the surveillance system application.
The generalized multi-sensor system architecture is
presented and analyzed. The operator's console
architecture is presented and basic functionalities are
identified. The display is identified as key component of
the operator's console and display basic requirements
criteria are derived. Using several cross-reference tables
the inter dependence of the system functionality and
operator's console structure is compared with basic human
machine interaction principles for defined application.

It is very important to understand all HMI - Human


Machine Interaction issues involved during multi-sensor
surveillance system application. This knowledge should
be incorporated in the system development phase. The
operator's console optimisation should be done by system
integrator starting from multi-sensor system structure,
mission scenario analysis and anticipated tasks analysis.
The operator's console optimisation can be done using two approaches: (a) designing and manufacturing one's own operator's console station suited to the system software; (b) using a selected console from the market and optimizing its function through control software modifications. The second approach is economically more beneficial and can be successful in the case that the key operator's ergonomically caused limitations are successfully resolved for the system operational environment.
Display basic properties related to the operator's visual perception of the video data are presented in order to identify those most critical for system functionality. The operator's console optimization means selecting the most suitable components and making the compromises needed to provide the best available performance. The display area size and resolution are the most critical among the display parameters.

2. MULTI-SENSOR SYSTEM
ARCHITECTURE

Position Sensing Group - provides position-related data using:
DMC - Digital magnetic compass - north sensor
GPS - Global positioning sensor using data from
satellite location systems
LRF - Laser range finder

The design of the multi-sensor surveillance system depends on many multidisciplinary fields, such as imaging sensor technology, image processing, position sensing technology, motion control, communication and networking technology. The selection of the surveillance system components and the definition of the system architecture is a complex task. There is no universal solution for all situations, only a solution optimized for the aimed application.
A multi-sensor surveillance system should provide the following key functionalities [13-16]:

Pan/tilt platform - provides sensor head motion and aiming, and position data at the same time. It should be oriented and calibrated.


Processing and communication Computer

Detection - Finding the object of interest - target, in the


observed scene, using reconnaissance and surveillance.

Operator's console- provides images presentation and all


necessary communication (command, and data) with
sensor head and others involved in surveillance process.

Positioning, Using sensor data and calculation algorithms


provide data of own and/or target position.

Power Supply- provides power necessary for system


operation.

Identification - all necessary data need to be provided for further action.
Tracking - in the case that the target moves or the scene changes, providing that the target continuously stays in the marked area inside the selected sensor field of view.
Image stabilisation - providing a stable image in the case of a disturbing environment, using built-in inertial, digital or hybrid stabilisation techniques.
Content Handling - handling of images and other sensor data for the following processes:
reporting - preparing a proper report to be transferred to others,
recording - recording images and data so they can be used later,
situational awareness - forming an alarm message using processed data.

Picture 2. Multi-sensor system sensor head examples: (a) VMSIS-1, (b) VMSIS-2 (courtesy of Vlatacom DOO)

3. OPERATOR'S CONSOLE
ARCHITECTURE
Operator's console is the most important part of the multisensor surveillance system, providing integration of all
system parts and human operator.

The generalized multi-sensor system architecture


providing required functionalities is shown in Picture 1.

Operator's console generalized architecture is presented in


Picture 3. Example of the Operator's console is presented
in Picture 4.
The operator's console comprises the parts listed below.

Visual display - used for presentation of the visual data (images, sensor status data), digital map, functional buttons, etc. It can be composed of one or more display panels and aimed at single-observer or multi-observer operation, depending on the task that should be accomplished.

Controls group: - provides controls for console or


system operation. It could consist of:
Sensor controls group
Display controls group (Luminance - brightness,
contrast, off-on)

Picture 1. Multi-sensor system generalized architecture

The multi-sensor system sensor head practical realisation example is shown in Picture 2. The multi-sensor system architecture is comprised of the following components:

Imaging Group - provides images of the space of interest. Imaging sensors should have the capability of field of view (FOV) selection, focus control and calibration. The system could be composed using any of the sensors listed below:
Day light (low light, mono or color) video,
Cooled or uncooled IR imager (SWIR, MWIR, LWIR).

Keyboard- provides generation of text messages that


are important for system operation. Keyboard could
be realized as separate hardware part or software
generated - virtual keyboard. In the case of virtual
keyboard button activation could be controlled by
cursor control device or by position sensing if


position sensitive panel is integrated with display


panel.
Cursor control device - used to control cursor motion over the screen. A mouse is the most common solution.
Hand controller (joystick) - most effectively used for pan/tilt motion control, but some other functionalities can be added, depending on the joystick design and system requirements.
Functional buttons - aimed at fast switching to a pre-programmed action. They can be realized as hardware switches placed on the display bezel or on the console command board, or as virtual buttons on the display screen activated by the cursor or through a position sensitive touch panel placed over the display screen.
Computer - provides the environment for the system software activity and other functions. Usually structured as a standard PC architecture.
Digital map and symbology - can be incorporated into the system software package. In most cases displayed on a separate display panel, but can be integrated into a single panel, depending on resolution and operational requirements.
Software - provides system integration and all important functionalities through a pre-programmed graphical user interface (GUI). The GUI content and structure are closely connected with the system functions and the operator's console structure and functionalities.

4. OPERATOR'S CONSOLE
FUNCTIONALITY
Function allocation in the system [12] is the key step in the operator's console structure selection. The operator's console task analysis process (function allocation and human engineering factors determination) is illustrated in Picture 5.

Picture 5. System function allocation and human factors


engineering activities
Table 1. Cross reference table: multi-sensor system basic functionalities and operator's console structure. Rows (multi-sensor platform basic functionalities): IR imager, video camera, LLLV, LRF, DMC, GPS, pan/tilt, digital map, commands, calibrations, controls, communication. Columns (operator's console structure): display (DIS.), functional buttons (real/virtual), keyboard (real/virtual), cursor control (CUR.), joystick (JOY.). Each cell rates the relevance of the console element for the given functionality on a scale from * to ***.

Picture 3. Operator's console generalized structure

Table 2. Cross reference table: operator's console functionalities and operator's console structure. Rows (operator's console basic functionalities): set-up, security, sensor calibration, sensor command (focus, zoom control), pan/tilt, digital map data, sensor data display, content handling (recording, reporting, map symbols). Columns (operator's console structure): display (DIS.), functional buttons (real/virtual), keyboard (real/virtual), cursor control (CUR.), joystick (JOY.). Each cell rates the relevance of the console element for the given functionality on a scale from * to ***.

The Operator's Console provides integration of the human operator and all system components, supporting system operation using at least the following functionalities:

Picture 4. Operator's console example


(courtesy of Vlatacom DOO)


o Set-up and security protection


o System sensors calibration
o Sensor command (pan/tilt motion, optical focus, optical
zoom, camera mode, LRF activation etc.)
o Sensor data display (reporting request)
o Content Handling (recording, reporting, digital map
symbology)
The cross reference table showing which parts of the operator's console structure are used by the multi-sensor system head components is given in Table 1. The cross reference table showing the influence of the operator's console components on the system functionalities is given in Table 2. These cross reference tables, or similar ones designed in accordance with the system structure and purpose, can help to find an optimized structure.


The active area is the display surface where the information content is presented. It is characterised by the diagonal (usually expressed in inches or mm) and the aspect ratio (1:1, 5:4, 4:3, 16:10, 16:9). Also important is the number of pixels, expressed as the product of the number of columns (horizontal) and the number of rows (vertical). Modern displays are built using standardized pixel counts (VGA - 640x480, XGA - 1024x768, or some other standardized value), referred to as the digital image resolution.
The starting point for determining the display area size and resolution is to determine the pixel density required for proper visual perception of the expected visual content. The next step is to determine the highest digital pixel resolution of the images to be presented. The display size and resolution should accommodate the images and add some space for functional buttons and other selected command or data presentation windows. In practice it is rarely possible to get everything on one screen, so some design compromises have to be made.

5. DISPLAY BASIC PROPERTIES


The display system is the final link between the gathered data and the user. In the case of image presentation, the display should be easy to read. If the display is not easy to see and easy to understand, the whole process is compromised.

Luminance, luminance controls and dynamic range

The brightness of a viewed object is defined in a psychological sense as the level of light intensity perceived by a viewer. The key physical measure of brightness is luminance. Brightness is defined as the luminance of the brightest component (usually white) in the centre of the screen and is measured in candela per square meter (cd/m2 = nit) or foot-lamberts (1 fL = 3.426 nits). The typical display luminance varies from 100 nits in a shadowed office environment up to 1000 nits in a high ambient illumination environment.
The other important consideration for display technologies is the luminance dynamic range (dimming range), i.e. the ratio between the minimum and the maximum luminance that can be generated, which allows the display luminance to be set in accordance with human eye accommodation properties. Avionic displays have a dimming range up to 1:200 in the day light operating mode and the same in the night mode. The brightness change over the dimming range should usually follow a predefined brightness control law to provide optimal visual information reception.

Picture 6. Display visual environment and influences


Display type selection basic criteria and selected properties [17] for the specific application in the operator's console are listed below:
Display resolution and size

Display contrast ratio is the ratio of the maximum luminance to the minimum luminance that can be generated in the same image. Display contrast is created by the difference in luminance of two adjacent surfaces. It is related to the display image detail luminance L and the background luminance Lb (usually defined as (L - Lb)/Lb). These parameters should be specified in a predefined illumination environment, since ambient light and reflections from the screen significantly affect the values.
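A trivial numeric sketch of the two definitions above, with hypothetical luminance values chosen only for illustration:

def contrast_ratio(l_max: float, l_min: float) -> float:
    """Display contrast ratio: maximum over minimum luminance in the same image (cd/m2)."""
    return l_max / l_min

def weber_contrast(l_detail: float, l_background: float) -> float:
    """Detail contrast (L - Lb) / Lb as defined in the text."""
    return (l_detail - l_background) / l_background

# Hypothetical values: 350 cd/m2 white and 0.35 cd/m2 black give a 1000:1 ratio
print(contrast_ratio(350.0, 0.35), weber_contrast(200.0, 50.0))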

The key measure of display quality, in accordance with HVS acuity, is the pixel density [18-20] expressed in pixels per inch (PPI) or pixels per millimeter (PPMM). The requirements for display resolution depend on the application through the anticipated observer-to-display distance: 300 PPI (12 PPMM) is regarded as best for hand-held device displays (so-called retina displays) for the typical visual acuity limit and the closest working distance of 250 mm, 170 PPI (7 PPMM) is a visual acuity limit for avionic displays, and 200 PPI (8 PPMM) is a good approximation of the HVS requirement within a computer gaming display application environment.
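As an illustration (not from [18-20]), the required pixel density can be estimated from the viewing distance under the assumption of one pixel per arcminute of visual angle; the PPI figures quoted above correspond to particular acuity and distance assumptions, so the numbers produced by this sketch will differ somewhat:

import math

def required_ppi(viewing_distance_mm: float, arcmin_per_pixel: float = 1.0) -> float:
    """Pixel density needed so that one pixel subtends the given visual angle (default 1 arcmin)."""
    pixel_pitch_mm = viewing_distance_mm * math.tan(math.radians(arcmin_per_pixel / 60.0))
    return 25.4 / pixel_pitch_mm  # pixels per inch

# Assumed example distances: ~250 mm hand-held device, ~600 mm console display
for d in (250.0, 600.0):
    print(d, round(required_ppi(d)))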

A display may not be able to deliver a pure black because the applied technology leaks light or reflects ambient light. A good display will offer a contrast ratio that exceeds 1000:1 in the dark and will be able to display a nearly pure black. From the point of view of HVS contrast sensitivity, this is more than enough.

The surveillance system display resolution that is


sufficient for most tasks is about 100 PPI (80-120 PPI).


Some other human-machine interaction techniques have been studied in the last decade, such as gesture sensing, voice and other sensory based techniques, to provide a faster operator response to the presented data.

A contrast ratio higher than 5:1 is required as a minimum for image detail detection in a high ambient light environment. In this case, black and other colors are washed out.
There are a lot of other parameters that should be
considered [18-24] as listed below.
- Color gamut
- Sun Readability
- NVIS Compatibility
- Improved EMI shielding
- Controlled Viewing Angle
- Specific Video signal Interface
- High Reliability and long Lifetime
- BIT (Built In Testing) and Controls

Some of the newly developed techniques have already found a place in some applications, but their application in serious surveillance systems requires more study. This is similar to the case of touch screens.

7. CONCLUSION
The operator's console is one of the most critical components in modern surveillance systems. Because of that, the definition of the operator's console architecture and functionality is one of the critical steps in the initial phase of system development. There is a wide choice of existing components that can be used in an operator's console, allowing flexibility of the operator's console design. On the other hand, some solutions are completely software controlled, so some deficiencies can be corrected using somewhat more complicated software solutions.

The capability of touch panels [25] to recognize the touch position on a panel that overlays the display surface while preserving screen content visibility has raised new interest in using touch panels in connection with virtual functional buttons or keyboards. Modern (projected capacitive) touch panels are one step forward, providing multi-touch recognition and even some gesture recognition. These capabilities are effectively used in mobile devices and are recommended for consideration in operator's console displays. Some obscuration that they introduce to the displayed image and troubles with sun-readability are considered the most important disadvantages, among others connected with human operator properties.

Over the past years it has turned out that the best results are obtained with the following solutions:
Use of a multifunctional display,
Use of a large active area, high resolution display,
Use of a well designed Graphical User Interface - GUI (see Picture 7 as an example),
Use of cursor control for activation of the functional buttons,
System components management using displayed functional buttons.

Touch panels will find some effective applications, but they do not have the capability to resolve all requirements, so one should be very careful when planning to integrate a touch panel into an operator's console.

A surveillance system using a human operator requires a carefully selected operator's console. It is nearly impossible to design a universal operator's console for all applications. The key criteria of goodness should be derived in accordance with the mission tasks. Luckily, some operator's console hardware solutions can be optimized through properly designed GUI screens and the sequences of their appearance. This is particularly true in the case that the display panel has very good perceptual performance.

6. HUMAN COMPUTER INTERFACE


BASIC REQUIREMENTS
The operator's console design should be adapted for the best response, taking care about [26-28]:
- Human operator properties(workload, speed of
reaction, operational properties)
- Visual - perceptual limitations are the key in imaging
system to provide proper circumstances for target
identification and timely action planning.
- Operator manipulations
- Mission and task requirements
- Mission environment influences (illumination
condition, temperature etc.).

References
[1] Kevin Baker and GordYoungson, Advanced
Integrated Multi-sensor Surveillance (AIMS) Operator Machine Interface (OMI) Definition Study,
NTIS ADA474282 - DRDC Atlantic CR 2006-242,
February 2007,
[2] K.K. Niall (ed.), Vision and Displays for Military and
Security Applications: The Advanced Deployable
Day/Night Simulation Project, Springer Science +
Business Media, LLC, (DOI 10.1007/978-1-44191723-2_1), New York, 2010
[3] National Research Council, Committee on Human
Factors, Commission on Behavioural and Social
Sciences and Education, Tactical Display for
Soldiers: Human Factors Considerations (panel on
Human Factors in the Design of Tactical Display
Systems for individual Soldier), National Academy
Press, Washington D.C., 1997

Picture 7: Graphical User interface example



[4] NATO, Defence Research Group, Improving


Function Allocation for Integrated Systems Design,
Technical Proceedings, AC/243(Panel 8)TP/7,
February 6, 1995

[18] Jeffrey Anshel, Visual Ergonomics - Handbook, CRC
Press, Boca Raton, 2005
[19] General Aviation Manufacturers Association,
"Recommended Practices and Guidelines for Part 23,
Cockpit/Flight Deck Design", GAMA Publication
No.10, 2000

[5] Jonathan Grudin, A Moving Target - The Evolution


of Human-Computer Interaction", a Chapter in
Human-Computer Interaction Handbook (J.Jacko Editor), Taylor & Francis, New York,2012

[20] Vicki Ahistrom, Bonnie Kudrick, Human Factors


Criteria for Displays: "A Human Factor Design
Standard Update of Chapter 5", Technical Report
DOT/FAA/TC-07/11,
Federal
Aviation
Administration, May 2007

[6] Neville Stanton, Alan Hedge, [..], Hal Hendrick,


Handbook Of Human Factors and Ergonomic
Methods, CRC Press, Boca Raton, London,
NewYork, 2005
[7] Andrew Sears, Julie A. Jack, Human Computer
Interaction - Fundamentals, CRC Press, Boca Raton,
2009

[21] International Committee for Display Metrology,


Information Display Measurement Standard, IDMS
Ver.1.03, Society of Information Displays, June 1,
2012

[8] MIL-STD-1472F, Human engineering design


criteria for military systems", US Department of
Defence, 1999

[22] Sig Mejdal, Michael E. McCauley, Dennis Bernger, "Human Factors Design Guidelines for Multifunction Displays", Report DOT/FAA/AM-01/17, Federal Aviation Administration, October 2001

[9] Saleh, Mo Nours Arab, Human-Computer


Interaction: Overview on State of the Art",
International Journal on Smart Sensing and
Intelligent Systems, Vol1, No1, March 2008, pp. 137
-160

[23] Branko Livada, "Avionic Displays", Scientific


Technical Review, 2012,Vol.62,No.3-4,pp.70-79

[10] Alejandro Jaimes, Nicu Sebe, "Multimodal Human Computer Interaction: A Survey", IEEE International Workshop on Human Computer Interaction in conjunction with ICCV, Beijing, China, Oct. 21, 2005
[11] NASA, Human Integration Design Handbook, NASA/SP-2010-3407, 2010

[24] Branko Livada, Radomir Janković, Nebojša Nikolić, "AFV Vetronics: Displays Design Criteria", Journal of Mechanical Engineering (Strojniški vestnik), Vol. 58, No. 6 (2012), 376-385
[25] Michael Mertens, HermanJ. Damveld, Clark Borst,
"An Avionics Touch Screen based Control Display
Concept", Proc. SPIE Vol. 8383, "Display
Technologies and Applications for Defence, Security
and Avionics", 2012


[12] Dan Diaper, Neville Stanton (Editors), The


Handbook of Task Analysis for Human - Computer
Interaction,
Lawrence
Erlbaum
Associates,
Publishers, New Jersey, London, 2004

[26] Alejandro Jaimes, NicuSebe, "Multimodal Human


Computer Interaction: A Survey", IEEE International
Workshop on Human Computer Interaction
iconjunction with ICCV, Beijing, China, Oct.21,
2005

[13] Naem Ahmad, MatthiasO'Nils, NajeemLawal, "A


Taxonomy of Visual Surveillance Systems", Internal
Report, Mittuniversitetet, Sundvall, Sweden, 2012
[14] Jean-Yves Dufour, (Editor), Intelligent Video
Surveillance Systems, ISTE ltd, London,John Wiley
& Sons Inc. Hoboken, 2013

[27] Fakhreddine Karray, Milad Alemzadeh, Jamil Abou Saleh and Mo Nours Arab, "Human-Computer Interaction: Overview on State of the Art", International Journal on Smart Sensing and Intelligent Systems, Vol. 1, No. 1, March 2008

[15] A. NejatInce, Ercan[..], CevdetIsik, Principles


of Integrated Maritime Surveillance Systems,
Springer Science + Business Media, New York,
Kluwer Academic Publishers, 1998

[28] Florian Segor, Axel Brkle, Thomas Partmann,


Rainer Schnbein, "Mobile Ground Control Station
for Local Surveillance", 2010 Fifth International
Conference
on
Systems,
2010
IEEEDOI
10.1109/ICONS.2010.33

[16] Carlo S. Regazzoni, Gianni Fabri, Gianni Vernazza


(Editors), Advanced Video-based Surveillance
Systems, Springer Science + Business Media, New
York, Kluwer Academic Publishers, 1999
[17] Janglin Chen, Wayne Cranton, Mark Fihn (Eds.), Handbook of Visual Display Technology, Springer-Verlag, Berlin Heidelberg, 2012


STATIONARY ON-ROAD OBSTACLES AVOIDANCE BASED ON


COMPUTER VISION PRINCIPLES
MOURAD BENDJABALLAH
Ministry of defence of Algeria, PhD applicant at Military academy, University of defence, Belgrade, Serbia,
mourad.bendjaballa@hotmail.com
STEVICA GRAOVAC
University of Belgrade, School of Electrical Engineering, Belgrade, Serbia, graovac@etf.rs
MOHAMMED AMINE BOULAHLIB
Ministry of defence of Algeria, PhD applicant at Military academy, University of defence, Belgrade, Serbia,
m.a.boulahlib@gmail.com
MILOŠ MARKOVIĆ
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, Serbia, mdmarkovic@mas.bg.ac.rs

Abstract: In this paper, the classification of the on-road obstacles based on the processing of a sequence of images
obtained by a monocular camera embedded on a vehicle as well as the appropriate automatic guidance principle for
obstacles avoidance are presented. The typical road scenarios have been used as a testing environment for the overall
algorithm. Existing obstacles (vehicles) are classified into three classes: stationary, incoming, and outgoing. The first
task in the algorithm consists of obstacles detection over the road background. This is followed by their tracking from
one frame to another based on the appropriate selection of features using the SURF method. After that, the obstacles
are recognized in a new frame, where it is possible to determine their position from the camera and the relative velocity
using projection geometry principles. Then, the polynomial method is used in order to find the path that avoids the
obstacles. Synthetic and realistic video sequences are used during the tests.
Keywords: obstacles avoidance, obstacles classification, polynomial method, SURF, SVM.
related to the obstacles reduced to a specific object
(vehicle, pedestrian, etc). In this case, the detection can be
based on search for specific patterns, possibly supported
by features such as texture, shape [2], symmetry [3] or the
use of an approximate contour. The second category is
used when the definition of the obstacles is more general.
In this case, two methods are generally used: (1) The
usage of a monocular camera based on an analysis of
optical flow [4]. This method detects only the moving
obstacles and fails when obstacle has small or null speed.
(2) The method based on stereo vision [5,6]. Images are
captured using two or more cameras from different
angles. This method generally requires more time to do
the necessary calculations and it is sensitive to the local
motion of each camera caused by the vehicle movement.

1. INTRODUCTION
Self-anti-collision systems have been developed for
preventing traffic accidents and achieving safe driving.
This system should alert drivers of the presence of
obstacles and help them to react in advance. The safe
operation of a vehicle depends heavily on the vision. The
vision of a driver can be improved by systems that
provide information about the environment around the
vehicle that cannot be seen or barely seen by human eyes.
Therefore, an obstacle detection system based on machine
vision is the subject of current research in smart vehicle
technology. A completely autonomous motion control of
a vehicle requires very precise and reliable detection,
extraction, tracking, and classification of the on-road
obstacles into incoming, outgoing and stationary one.

Generally, a method for detecting both moving and static objects simultaneously is required. Also, using a monocular camera is much better because of the economy aspect and the processing time. In fact, the method using a monocular camera carries out the processing more easily in real time.

Obstacle avoidance is an essential function for any mobile robot. Range sensors are the most commonly used devices for detecting obstacles, but the accuracy of sonars depends on the angle of reflection and the material of the detected object, while laser range sensors are expensive and can be harmful. Moreover, both are active sensors, which is undesirable for military applications, for instance. On the contrary, video cameras are now cheap, low consumption and high resolution sensors. Different solutions exist; they might be classified into two categories [1]: The first one is

In this paper, we introduce a vision algorithm for


detection, tracking and classification of the on-road
obstacles in Section 2. Then in Section 3 we present the
polynomial method used for the stationary on-road
obstacles avoidance. The paper is concluded in Section 4,
with some comments regarding the actual limits in

application and ideas about the future work.

2.2. Classification inside the "non road" region

2. THE VISION ALGORITHM

Picture 2(a) shows the result of detecting the road region, and the template image of the road region is shown in (b), where the black pixels represent the road. After analysing a particular row in the image, one obtains a profile as shown in Picture 3. Based on this line profile, white line segments that have two adjacent black segments on both the left and right sides are the line segments that belong to the road obstacles. By checking each row in the template image of the road, this classification can be done. Picture 2(c) shows the results of this classification. The final representation of a search area superimposed onto the original image is shown in (d). Some tall vehicles like trucks are not going to be fully encompassed by this type of tracking window, and some low-profile cars would not fill the tracking window completely, but the choice of a square-shaped tracking window seemed a reasonable compromise [7].
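A minimal sketch of this row-profile rule (illustrative only, not the authors' implementation), assuming the template row is given as a binary array with 0 for road (black) and 1 for non-road (white) pixels:

import numpy as np

def obstacle_segments_in_row(row: np.ndarray) -> list:
    """Return (start, end) column pairs of white runs bounded by road pixels on both sides."""
    segments = []
    n = len(row)
    i = 0
    while i < n:
        if row[i] == 1:
            start = i
            while i < n and row[i] == 1:
                i += 1
            end = i - 1
            # keep the white run only if it has road (black) pixels on both its left and right
            if start > 0 and end < n - 1 and row[start - 1] == 0 and row[end + 1] == 0:
                segments.append((start, end))
        else:
            i += 1
    return segments

# Example: the middle white run is flagged as belonging to an on-road obstacle
print(obstacle_segments_in_row(np.array([1, 1, 0, 0, 1, 1, 1, 0, 0, 1])))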

In order to detect the on-road obstacles, to track them, and


to determine their positions and relative velocities, the
following operations are employed. First, the road region
is detected using the SVM (Support Vector Machine)
classification method in order to distinguish class "road"
from the class "non-road". Second, the non-road region as
the result of this detection is classified into two areas:
obstacles and road environment. After the latter
classification, one has three types of regions:
environmental area, road region, and obstacles. The real
on-road obstacles as cars, pedestrians, boxes, etc. are
belonging to the class "obstacles". Monitoring each of
these obstacles is done by using the SURF matching
algorithm. The final step consists in calculating the
obstacles' positions in the field of view and the calculation
of their relative velocities in order to distinguish the static
and dynamic obstacles to avoid it in the next phase of our
algorithm.

2.1. Road region extraction


Picture 2. (a) the result of road region detection, (b) road


region template image, (c) the result of region
classification, (d) on-road obstacles detection.

Picture 1. Flowchart of the algorithm of road


segmentation.
The proposed algorithm is composed of five components
[7]. In the first feature extraction component, a feature
vector is extracted from each pixel of input image.
Second, the component of dynamic training database
(DTD) is filled with training set labeled by a human
supervisor in initialization and updated by the new
training set online. Third, the component of Classifier
Parameters Computing is used to estimate the parameters
in SVM classifier. The fourth SVM classifier component
is in charge of training and classification which takes the
training data and classifier parameters to train the SVM
classifier and use the trained SVM classifier to classify
image into road/non-road classes. The last component
contains two stages: Morphological Operation and Online
Learning Operation. The former implements connected
region growing and hole filling on the classification result
to determine the road region. The latter compares
morphological result and classification result to evaluate
the quality of current classifier, then select new training
set from that comparison and update the DTD.
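A minimal sketch of the training and classification steps only (the DTD update, morphological post-processing and online learning stages described in [7] are omitted); the RBF kernel and the use of raw per-pixel values as features are assumptions, not the feature set of the paper:

import numpy as np
from sklearn.svm import SVC

def train_road_classifier(features: np.ndarray, labels: np.ndarray) -> SVC:
    """Train an SVM on per-pixel feature vectors (one row per pixel), labels 1 = road, 0 = non-road."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(features, labels)
    return clf

def classify_image(clf: SVC, image: np.ndarray) -> np.ndarray:
    """Classify every pixel of an H x W x C image and return an H x W road/non-road mask."""
    h, w, c = image.shape
    flat = image.reshape(-1, c).astype(np.float64)
    return clf.predict(flat).reshape(h, w)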

Picture 3. Line profile of pixel intensity values (367th row


in the road region image).

2.3. Proposed Tracking Algorithm


As a natural candidate technique of tracking, we
considered the SURF (Speeded Up Robust Features)
algorithm [8], as a version reducing the computing time.
The SURF algorithm is composed of three consecutive
steps. The key-points are extracted from the rectangular
regions detected after the step described in Section 2.2.


These key-points are described as vectors in the


description step. The next step is the matching. Several
vectors from a database are matched against new vectors
from a new input image by calculating the Euclidian
distance between these vectors. This way the objects can
be recognized in a new frame. When the sufficient
number of matched points is found, particular obstacle is
marked as recognized and it is not tested more. Some new
obstacles would appear in this new frame and they will be
considered in the next one, while there would be the cases
that some of previously existing obstacles are now
vanishing or are not recognized. Unmatched obstacles
would be treated in the next frames as the new ones. The
pseudo-code illustrating this part of algorithm follows:

of the rectangle ABCD. This picture shows the result of


calculating scene depth using the information about
distance O E which is equal to one half of lane-width
and the position of the camera relative to the virtual point
G

O( R vector). The next step is calculating the distance to

each of the obstacles on the road by providing the virtual


point O using scene depth but here the unknown quantity
becomes O I M = O O 1 . Point O1 is the center of the
base of the green square (the result of algorithm described
in section 2.2). picture 5(b) illustrates this last step in
calculation of a scene depth to the point O1, while on
picture 5(c) the camera distance from the obstacle is
shown. Estimated relative position is (-29.38, -3.07, 1.51)
[m], while the real was (-30, -3.15, 1.5) [m], introducing
the relative error of (2.1, 2.5, 0.67) %.

Algorithm tracking_obstacles [9]

Begin
  Extraction of obstacles from the frame N1;
  for i = 2 : Number of frames do
    Extraction of obstacles from the frame Ni;
    for j = 1 : Number of obstacles in frame Ni-1 do
      Test = false;
      for k = 1 : Number of obstacles in frame Ni do
        Test = surf(frame i-1, frame i, obstacle j, obstacle k);
        % search if the obstacle j in the frame i-1 matches
        % obstacle k in the frame i.
        If test then obstacle k = obstacle j;
      end
      if non(test) then obstacle lost;
    end
    All the obstacles that stay are new obstacles;
  end
end
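A hedged sketch of the per-obstacle matching test: the paper uses SURF with Euclidean descriptor distance, but SURF is only available in the OpenCV contrib build, so ORB (binary descriptors, Hamming distance) is substituted here purely for illustration; the distance threshold and minimum match count are assumptions:

import cv2

def match_obstacle(prev_patch, new_patch, min_matches=10):
    """Decide whether an obstacle patch from frame i-1 reappears in a candidate patch of frame i."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(prev_patch, None)
    kp2, des2 = orb.detectAndCompute(new_patch, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # keep only reasonably close descriptor pairs before counting them
    good = [m for m in matches if m.distance < 50]
    return len(good) >= min_matches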

Picture 5. Illustration of the camera position reconstruction: (a) the reference rectangle, with |OE| = 1.5 m giving the scene depth R = 10.1373 m and the position vector R = (-10.02, 0.00, 1.51) m; (b) the scene depth to point O1, R_O1 = 29.5771 m, with |O O1| = 19.5975 m; (c) the estimated camera-to-obstacle vector R = (-29.38, -3.07, 1.51) m

2.5. Relative Velocities and Classification


To calculate the relative velocity of each on-road
obstacle, one should first calculate the relative positions
in consecutive time instants t and t + 1, using the relation:

V_ob = ( R_ob(t+1) - R_ob(t) ) / Δt      (1)

The classification is done according to:

V_relative = V_Obstacle - V_Camera,  i.e.  V_Obstacle = V_relative + V_Camera

V_Obstacle = 0 : stationary vehicle
V_Obstacle < 0 : incoming vehicle
V_Obstacle > 0 : outgoing vehicle
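A small sketch of this classification from two successive relative-position estimates; the tolerance for treating the obstacle velocity as zero and the use of the longitudinal component are assumptions made for illustration:

import numpy as np

def classify_obstacle(r_prev, r_curr, v_camera, dt, tol=0.5):
    """Classify an obstacle from relative positions at t and t+1 (eq. 1) and the camera velocity.
    r_prev, r_curr: relative position vectors [m]; v_camera: ego (camera) velocity vector [m/s];
    tol is an assumed tolerance for treating the obstacle velocity as zero."""
    v_relative = (np.asarray(r_curr, dtype=float) - np.asarray(r_prev, dtype=float)) / dt
    v_obstacle = v_relative + np.asarray(v_camera, dtype=float)
    v_long = v_obstacle[0]  # longitudinal component, sign convention as in the text
    if abs(v_long) <= tol:
        return "stationary"
    return "incoming" if v_long < 0 else "outgoing"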

2.4. Position of an obstacle relative to the camera

2.6. Experimental Results

Picture 4. Position reconstruction procedure. [10]


To calculate the position of an on-road obstacle relative to
camera mounted on the moving vehicle, one should apply
the principle illustrated in picture 4. This principle is
illustrated here on example of synthesized sequence of
images.
The reference rectangleABCD now is as shown in picture
5 (a) with a priory known the information: (AD) // (BC),
(AB) // (CD) and the width of the lane (3 m in this
example). Vanishing point P is the intersection ofADand
BC,Qis the intersection of ABandDC, and Ois the center


Picture 6 shows the experimental results of different steps


from the proposed algorithm. The first row shows the
results of the input image classification into the road
region (green area) and the non-road region. The second
row represents the real detected on-road obstacles
surrounded by green tracking windows (green squares).
The third row provides data about the tracking of these
detected on-road obstacles from one frame to another,
where an interesting number of matches associated to
each obstacle can be seen. Furthermore, the fourth row
gives the calculated position of each obstacle using
different geometric projection principles. Finally, the last
row depicts the estimated distance results from the camera
to each on-road obstacle (left side) and the classification
results of these on-road obstacles based on relative
velocities of each of them (right side).


Picture 6. Classification of obstacles (real scenario): results shown for frames t to t+4; the bottom plot gives the estimated distance (X coordinate, m) versus time (s) for the black and the blue vehicle


When there are no other vehicles in the adjacent lane, two
5th polynomials one for lateral displacement and one for
longitudinal displacement was designed based on the
criteria of minimum total kinetic energy during the lane
change maneuver with constraints on both lateral
acceleration and minimum jerk to ensure the rider's
comfort. The polynomial can be derived by having three
constraints at the initial point and three constraints at the
end of the lane change maneuver. Those constraints are
displacement, velocity and acceleration at initial and final
point in terms of lateral and longitudinal directions. For
simplicity, two assumptions are made. One is that the
acceleration in both directions at initial and final state is
zero. Also, it's assumed that ego vehicle's velocity is the
same at the starting point of the maneuver and the end of
the maneuver.

3. STATIONARY ON-ROAD OBSTACLE


AVOIDANCE

Picture 7. Lane change for stationary on-road obstacle avoidance.

d - total longitudinal distance traveled during the lane change maneuver
T - total time used for the lane change
w - width of the lane
v - velocity of the ego vehicle at the initial and final state
Amax - maximum lateral acceleration of the ego vehicle

3.1. Calculation of trajectory parameters


Longitudinal displacement:

x(t) = a5 t^5 + a4 t^4 + a3 t^3 + a2 t^2 + a1 t + a0      (2)

Lateral displacement:

y(t) = b5 t^5 + b4 t^4 + b3 t^3 + b2 t^2 + b1 t + b0      (3)

Now those six coefficients need to be calculated using six constraints. First, the boundary conditions for the longitudinal direction are:

X_0 = [x_0, x'_0, x''_0]^T = [x_0, v_x0, a_x0]^T = [0, v, 0]^T,
X_f = [x_f, x'_f, x''_f]^T = [x_f, v_xf, a_xf]^T = [d, v, 0]^T      (4)

Applying the initial conditions:
x(0) = x_0    ->   a_0 = x_0 = 0
x'(0) = v_x0  ->   a_1 = v_x0 = v
x''(0) = a_x0 ->   2 a_2 = a_x0,  i.e.  a_2 = a_x0 / 2 = 0

At the end of the maneuver (t = T):
x(T)  = a5 T^5 + a4 T^4 + a3 T^3 + (1/2) a_x0 T^2 + v_x0 T + x_0 = x_f
x'(T) = 5 a5 T^4 + 4 a4 T^3 + 3 a3 T^2 + a_x0 T + v_x0 = v_xf
x''(T) = 20 a5 T^3 + 12 a4 T^2 + 6 a3 T = a_xf      (5)

For easier calculation, eq. (5) should be formulated in matrix form; based on the boundary conditions, a further simplification can be made:

[  T^3    T^4     T^5  ] [a3]   [ x_f - x_0 - v_x0 T - (1/2) a_x0 T^2 ]   [ d - vT ]
[ 3T^2   4T^3    5T^4  ] [a4] = [ v_xf - v_x0 - a_x0 T                ] = [   0    ]      (6)
[  6T   12T^2   20T^3  ] [a5]   [ a_xf                                ]   [   0    ]

From the above calculation, all coefficients can be represented as functions of d, v and T:

a3 = (10 / T^3) (d - Tv),   a4 = -(15 / T^4) (d - Tv),   a5 = (6 / T^5) (d - Tv)      (7)

The same procedure can be applied to derive the coefficients of the trajectory along the lateral direction. The boundary conditions for the lateral direction are:

Y_0 = [y_0, y'_0, y''_0]^T = [0, 0, 0]^T,   Y_f = [y_f, y'_f, y''_f]^T = [w, 0, 0]^T      (8)

Now the longitudinal and lateral trajectories during the lane change can be represented as:

x(t) = v t + 10 (d - Tv) t^3/T^3 - 15 (d - Tv) t^4/T^4 + 6 (d - Tv) t^5/T^5
y(t) = w ( 10 t^3/T^3 - 15 t^4/T^4 + 6 t^5/T^5 )      (9)

From the equations for both trajectories, besides the parameter v, which is pre-defined, the other two parameters d and T are still to be determined.

3.2. Trajectory Optimization

Since this is the desired trajectory designed for the controller to follow, an optimization process is preferred to determine these two parameters. If the criterion of the lane change maneuver is to minimize the overall maneuver time with fixed constraints, i.e. to formulate a time-optimal control problem with bounded lateral acceleration and lateral jerk, then applying the Pontryagin principle to solve the Hamiltonian of the time-optimal problem is a good choice [11]. However, the time-optimal lane change requires the driver or the controller to apply maximum lateral acceleration throughout the process, which causes discomfort to the passengers. Thus, here the cost function minimizing the total kinetic energy is used [12], taking the acceleration constraint into account. In this case not only driver comfort is considered, but also the use of minimal energy (for example fuel energy) to finish the lane change maneuver. The relationship between the extra force needed for a lane change and the extra kinetic energy is:

F d = (1/2) m ( v_f^2 - v_i^2 )

where F is the extra force needed to accelerate the vehicle to v_f, the final velocity at the end of the maneuver, v_i is the initial velocity, and d is the overall distance traveled during the lane change maneuver. Since the mass of the vehicle is constant, a smaller velocity implies a smaller external force needed. Thus, the cost function is formulated as:

J = Integral from 0 to T of ( x'(t)^2 + y'(t)^2 ) dt      (10)

Substituting both equations (9) into the cost function and with some manipulation, the cost function can be represented as:

J = (10 / (7T)) ( w^2 + (d - vT)^2 ) + 2 v (d - vT) + v^2 T      (11)

With the cost function formulated, the acceleration constraint of the optimization problem needs to be considered. The maximal acceleration or deceleration should be bounded in order to ensure driver comfort during the lane change. From (9), the vehicle acceleration can be represented as:

f_a(t) = sqrt( x''(t)^2 + y''(t)^2 )
       = sqrt( (d - Tv)^2 ( 60 t/T^3 - 180 t^2/T^4 + 120 t^3/T^5 )^2
             + ( 60 w t/T^3 - 180 w t^2/T^4 + 120 w t^3/T^5 )^2 )      (12)

The maximum acceleration should be bounded to ensure riding comfort, and it can be shown that it can be represented as:

f_a,max = (10 sqrt(3) / (3 T^2)) sqrt( w^2 + (d - vT)^2 )      (13)

Thus, the overall optimization problem for the lane change trajectory, with the two optimization variables T and d, can be formulated in this way:

min J = (10 / (7T)) ( w^2 + (d - vT)^2 ) + 2 v (d - vT) + v^2 T
s.t.  (10 sqrt(3) / (3 T^2)) sqrt( w^2 + (d - vT)^2 ) = A_max      (14)

With a high enough velocity (so that d is close to vT and the constraint in (14) is dominated by the lane width w), the optimal solutions of the problem can be obtained from:

T* ≈ 2.4 sqrt( w / A_max ),   d* ≈ 2.4 v sqrt( w / A_max )      (15)

3.4. Simulation of trajectories generation for


stationary on-road obstacle avoidance
A sample lane change trajectory is simulated here using
the fifth order polynomial and both the total time of lane
change T and the overall distance traveled while lane
changing d are calculated using the numerical
approximating method. In this example, vehicle is driving
at vx = 20 and distance traveled is d = 49.18m. The
following graph shows the generated trajectory as well as
it's velocity and acceleration.

3.3. Numerical Approximating of T* and d*


With the optimization problem formulated and convexity
of the cost function proved, now the solution of the
problem need to be found. Here, since an analytic solution
is very difficult to find, the numerical method together
with the optimization software Lingo proposed in [12] is
used. And based on the results, it can be concluded that

Picture 8. Lane Change Trajectory 5th order.

Picture 9.b Lateral Velocity and Acceleration of


the Trajectory 5th order

Picture 9.a Longitudinal Velocity and


Acceleration of the Trajectory 5th order
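To make the above procedure concrete, the following is a minimal Python sketch (not from the paper) that builds the quintic trajectories of equation (9), evaluates the cost (11) and the peak acceleration (13), and solves the constraint of (14) for T by simple bisection under the assumption d = v*T, instead of the Lingo solver used in [12]; the lane width w and the limit Amax in the example are illustrative assumptions, not the values used by the authors.

import numpy as np

def lane_change(v, w, a_max, n=200):
    """Quintic lane-change trajectory: returns t, x(t), y(t), (T, d), cost and peak accel.

    Assumes d = v*T (same speed at start and end), so the constraint (14)
    reduces to (10*sqrt(3)/(3*T**2))*w = a_max, solved here by bisection.
    """
    g = lambda T: 10.0 * np.sqrt(3.0) / (3.0 * T**2) * w - a_max   # constraint residual
    lo, hi = 0.1, 60.0                        # bracket for T [s]
    for _ in range(80):                       # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    T = 0.5 * (lo + hi)
    d = v * T                                 # overall longitudinal distance, cf. eq. (15)

    t = np.linspace(0.0, T, n)
    s = t / T
    blend = 10*s**3 - 15*s**4 + 6*s**5        # normalized quintic profile, eq. (9)
    x = v*t + (d - v*T) * blend               # longitudinal displacement
    y = w * blend                             # lateral displacement

    J = 10/(7*T)*(w**2 + (d - v*T)**2) + 2*v*(d - v*T) + v**2*T    # cost, eq. (11)
    fa_max = 10*np.sqrt(3)/(3*T**2)*np.sqrt(w**2 + (d - v*T)**2)   # peak accel, eq. (13)
    return t, x, y, T, d, J, fa_max

if __name__ == "__main__":
    # Illustrative values only: v = 20 m/s, lane width 3.5 m, Amax = 2 m/s^2.
    t, x, y, T, d, J, fa = lane_change(v=20.0, w=3.5, a_max=2.0)
    print(f"T* = {T:.2f} s, d* = {d:.2f} m, max accel = {fa:.2f} m/s^2")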

4. CONCLUSION

The polynomial method is used for the stationary on-road


obstacles avoidance to ensure the rider comfort, and to
minimize the total energy during the maneuver. Two
parameters are considered: T which is the total time
required to finish the maneuver and d which represents
the overall traveled distance.

The algorithm for automatic classification of on-road
obstacles according to their relative velocities is a
prerequisite for the application of an automatic control
system for obstacle avoidance. The on-road obstacles
have to be detected and then described properly in order to
enable their tracking from frame to frame. Our choice for
these two steps has been oriented toward rather complex
methods: an SVM based on an eight-component vector (color +
texture) for the recognition of the road area, SURF based on
a 64-component descriptor for the description and tracking of keypoints inside the tracking windows, and some geometric
projection principles used to determine the position
and relative velocity of each on-road obstacle. These steps
are verified using realistic road-traffic images.

References
[1] Demonceaux C, Kachi-Akkouch D. Robust obstacle detection with monocular vision based on the motion analysis. In: Proceedings of the IEEE Intelligent Vehicles Symposium; 2004. p. 527-532.
[2] Broggi A, Bertozzi M, Fascioli A, Sechi M. Shape-based pedestrian detection. In: Proceedings of the IEEE Intelligent Vehicles Symposium; 2000. p. 215-220.
[3] Teoh S, Bräunl T. Symmetry-based monocular vehicle detection system. Machine Vision and Applications; Vol. 23; 2012. p. 831-842.
[4] Braillon C, Pradalier C, Crowley J, Laugier C. Real-time Moving Obstacle Detection Using Optical Flow Models. In: Proceedings of the IEEE Intelligent Vehicles Symposium; 2006. p. 466-471.
[5] Bernini N, Bertozzi M, Castangia L, Patander M, Sabbatelli M. Real-time obstacle detection using stereo vision for autonomous ground vehicles: A survey. In: Proceedings of the IEEE ITSC; 2014. p. 873-878.
[6] Labayrade R, Aubert D. Robust and fast stereo vision based obstacles detection for driving safety assistance. In: Proceedings of Machine Vision Applications; 2004. p. 624-627.
[7] Bendjaballah M, Graovac S. One Approach to Detection and Extraction of On-road Obstacles Based on Image Processing. In: Proceedings of Robotics in Alpe-Adria-Danube Region (RAAD); 2016.
[8] Bay H, Ess A, Tuytelaars T, Van Gool L. SURF: Speeded Up Robust Features. Computer Vision and Image Understanding; 2008. p. 346-359.
[9] Bendjaballah M, Graovac S. A Tracking of On-Road Obstacles and Classification Regarding to their Relative Velocities. In: Proceedings of Electrical, Electronic and Computing Engineering (IcETRAN); 2016.
[10] Kanatani K. Geometric Computation for Machine Vision. Oxford University Press; Oxford, U.K.; June 1993. p. 15-51.
[11] Hatipoglu C, Ozguner U, Redmill KA. Automated lane change controller design. IEEE Transactions on Intelligent Transportation Systems, 4(1):13-22, 2003.
[12] Shamir T. How should an autonomous vehicle overtake a slower moving vehicle: Design and analysis of an optimal trajectory. IEEE Transactions on Automatic Control, 49(4):607-610, 2004.


GPS AIDED INS WITH GYRO COMPASSING FUNCTION


IVANA TRAJKOVSKI
Military Technical Institute, Belgrade, e-mail: vivanant@eunet.rs
NADA ASANOVIĆ
Military Technical Institute, Belgrade, e-mail: anamilnr@open.telekom.rs
VLADIMIR VUKMIRICA
Military Technical Institute, Belgrade, e-mail:div1@ptt.rs
MILAN MILOŠEVIĆ
Military Technical Institute, Belgrade, e-mail: marija.m@beotel.net

Abstract: This paper presents the realized design and testing results of an inertial navigation system (INS) with gyrocompassing
(azimuth determination) function. The system is aided with a GPS receiver. It consists of three fiber optic gyros, three
accelerometers and a block of electronics. Both the hardware and the software solution for the special purpose navigation computer are
described. Testing procedures as well as testing results of the INS gyrocompassing mode are given. The realization of the INS model
with gyro compassing function is a significant achievement in this kind of technology, with possibility for further
optimization and analysis.
Keywords: inertial navigation system (INS), gyro compassing, azimuth, FOG, GPS, special purpose navigation computer.

1. INTRODUCTION

An inertial navigation system (INS) is a device that provides position and orientation data of the object it is mounted on. It consists of an inertial measurement unit (IMU) with inertial sensors (gyros and accelerometers), a special purpose computer performing data acquisition and the navigation algorithm, and a control and display unit (CDU) for interfacing with the operator. An INS can have different modes of operation and aiding devices (such as GPS, magnetometers, velocity measuring devices etc.) in order to improve the performance.

In the previous period a model of an inertial navigation system (INS) was designed and produced in MTI. This paper presents the realized design of this INS. The system has three working modes: gyro compassing (azimuth determination), feedback and navigation mode, and it is suitable for various applications. The system is designed as a GPS aided system; GPS data is used for initialization, azimuth measurement and for improving the accuracy in navigation mode. Both the hardware and the software solutions of the INS were designed and realized in MTI, and all the testing procedures were defined and carried out there as well.

Inertial navigation is a very complex and important area, and we can mention some existing solutions of INS with gyro compassing function, such as:
ADVANS VEGA (manufacturer iXBlue): position accuracy 0.1% DT, heading accuracy 0.3 mils [3]
ADVANS LYRA (manufacturer iXBlue): position accuracy 0.3% DT, heading accuracy 1 mils [3]
ADVANS URSA (manufacturer iXBlue): position accuracy 0.5% DT, heading accuracy 4 mils [3]
LN100 (manufacturer Northrop Grumman): position accuracy 0.8 nm/h, heading accuracy 0.8 mils [4]

We can also underline that the realization of an INS with gyro compassing function combines the utilization of very accurate sensors and a rather demanding algorithm. The price of sensors and systems increases with accuracy, and it is sometimes even very hard to obtain a system with specific characteristics.

This paper presents a description of the model of INS that is completely realized in MTI (laboratory of inertial sensors and navigation systems), including hardware and software design, assembling, testing and analysis of the testing results.

2. MATHEMATICAL MODEL

2.1. Gyro compassing function (algorithm)

This part of the function is performed in the stable state (without any movement) of the object on which the INS is mounted. The gyro compassing (azimuth determination) algorithm is based on measuring the signals from three gyros and three accelerometers that are mounted on a trihedron. They define three orthogonal axes OXb, OYb and OZb that refer to the sensors' coordinate system. The coordinate system OXg, OYg and OZg represents the navigation coordinate system referred to the Earth (called NED). The OXg and OYg axes are in the horizontal plane and the OZg axis is oriented vertically down. The OXg axis

is oriented to the north. The relation between those two systems is defined by the Euler angles: azimuth, elevation and sidewise slope (see Picture 1).

Picture 1. Euler angles

The gyros and accelerometers measure the components of the Earth rotation angular rate (Omega) and of the gravity acceleration (g). The implemented azimuth algorithm calculates the azimuth angle based on the measured sensors' signals, and the accuracy of the azimuth determination directly depends on the sensors' accuracy. The basic parameter that defines the class of accuracy is the bias instability: Dy for the gyro and By for the accelerometer.

The following equation (1) defines the accuracy of the azimuth determination based on the sensors' characteristics [1, 2, 5]:

delta_psi = Dy/(Omega*cos L) + (By*tan L)/g        (1)

The sensors assembled in this model of INS have the following characteristics: Dy = 0.1 deg/h = 0.001745 rad/h and By = 4*10^-4 m/s^2. Parameter Dy is given in the gyro producer's datasheet, while parameter By is determined from acquisition files using the Allan variance method, because the accelerometer's producer did not declare this value.

L is the geographic latitude and can be obtained from the GPS device or manually initialized; for the measurements performed in MTI it is L = 44.741 deg. The Earth rotation angular rate and the gravity acceleration have the following values: Omega = 7.292115*10^-5 rad/s and g = 9.805066 m/s^2. According to the given data, the expected value of the standard deviation of the azimuth determination is

0.539 deg = 9.575 mils.

Equation (1) can be analyzed in order to evaluate the effects of the gyro and accelerometer characteristics separately. Table 1 presents these calculated contributions to the azimuth determination error.

Table 1. Azimuth determination error
                        rad           deg        mils
Total error             0.00940056    0.538612   9.575333
Gyro effect             0.00936013    0.536296   9.534153
Accelerometer effect    0.000040428   0.002316   0.04118

Besides the accuracy of the azimuth determination, an important factor is the required measurement time. The system and the algorithm require a settling time after power-on because of the filtering implemented for the signals from the sensors.

2.2. Navigation algorithm

This part of the software application is designed according to the defined mathematical model for navigation and refers to the determination of the position and orientation of the moving object. The application runs in a time-defined loop, and this time interval represents the sampling time of the algorithm. The input values for the navigation function are the components of the angular rate and the linear acceleration, converted from the sensors' to the geographic coordinate system. The components of the linear velocity and the position increments, as well as the orientation angles, are calculated in each iteration of the program application.

In order to improve the navigation performance, a GPS receiver with two antennas is used as an aiding device. GPS data are used in the initialization phase and as estimation values in the implemented Kalman filter.

2.3. Feedback algorithm

This part of the software application is designed for the feedback application of the INS data, and it can also be combined or compared with data from other sensors if they are mounted on the system.

3. INS DESIGN

The INS consists of:
the basic INS, which consists of the inertial measurement unit (IMU) with three fiber optic gyros (FOG) and three accelerometers, and of the electronic block with the special purpose computer, interface electronics and supply unit;
a GPS receiver with two antennas;
a control and display unit (CDU) in the vehicle's cabin.

Picture 2 represents the block scheme of the INS with all the assembled components.
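As a quick numerical check of equation (1) (a sketch, not part of the paper), the following Python snippet reproduces the error budget of Table 1 from the sensor and site parameters quoted above.

import math

D_y   = 0.1 * math.pi / 180 / 3600      # gyro bias instability: 0.1 deg/h -> rad/s
B_y   = 4e-4                            # accelerometer bias instability [m/s^2]
L     = math.radians(44.741)            # geographic latitude of MTI
OMEGA = 7.292115e-5                     # Earth rotation rate [rad/s]
g     = 9.805066                        # gravity acceleration [m/s^2]

gyro_term  = D_y / (OMEGA * math.cos(L))   # gyro contribution [rad]
accel_term = B_y * math.tan(L) / g         # accelerometer contribution [rad]
total      = gyro_term + accel_term        # equation (1)

to_mils = lambda rad: math.degrees(rad) * 6400.0 / 360.0   # NATO mils (6400 per circle)
print(f"gyro: {gyro_term:.6f} rad, accel: {accel_term:.6f} rad")
print(f"total: {math.degrees(total):.3f} deg = {to_mils(total):.2f} mils")  # ~0.539 deg, ~9.58 mils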


Picture 2. Block scheme of INS (basic INS, GPS receiver with two GPS antennas, interface, CDU, supply unit and an optional VMS)

3.1. Inertial measurement unit (IMU)

The IMU is a part of the basic INS and it consists of three fiber optic gyros (model SRS-1000), three accelerometers (model A-15) and the associated electronics. The gyro and accelerometer characteristics are presented in the catalog datasheets [1] and [2].

Gyro (Picture 3) characteristics:
Measuring range: +/-90 deg/s
Bias instability in the temperature range from -30 C to 50 C: 0.1 deg/h
Scale factor accuracy: 0.01%
Bandwidth: 100 Hz
Random walk: 0.001 deg/sqrt(h)
Power supply: 5 V, 6 W
Interface: RS485, protocol SSP 2.0

Picture 3. Fiber Optic Gyro

Accelerometer (Picture 4) characteristics:
Acceleration measurement range: +/-20 g
Scale factor: 1.2 +/- 0.2 mA/g
Scale factor variations: < 200 ppm
Operating temperature range: -40 to +75 C
Bias: 0.012 g
Power supply voltage: +/-15 V
Base plane misalignment: 0.006 g

Picture 4. Accelerometer

The associated electronics of the IMU block is based on a microcontroller platform with an analog-to-digital converter for the acceleration measurement and an RS485 interface for communication with the gyros. The microcontroller application for the IMU acquisition runs in a time-defined loop. It contains a part for UART (RS485) communication using the SSP 2.0 protocol defined by the producer, and the angular rate is decoded using the IEEE 754 format. The linear acceleration is measured as the analog voltage drop on a precise resistor for each accelerometer, using analog-to-digital conversion (ADC); communication with the ADC is done using the SPI hardware and protocol.

4. TESTING RESULTS

4.1. Laboratory testing results

After assembling the model of INS, laboratory tests were done in the laboratory for inertial sensors and systems in MTI, on a two-axis test table. The first part of the testing concerns the sensor characteristics: Pictures 5 and 6 present the Allan variance diagrams for the accelerometers and the gyros.

Picture 5. Diagram of Allan variance for accelerometer

Determination of the bias instability for the accelerometers was important because this value was not given by the producer. The bias instability for the gyro, determined by the Allan variance while measuring in similar, stable temperature conditions, was approximately

B = 8*10^-6 deg/s = 0.0288 deg/h.

The calculated bias instability for the gyro is even better than the one given by the producer; the reason for this is the stable environment temperature during the tests and the stable platform (test table) on which the INS was mounted. For this value of bias instability the standard deviation of the azimuth determination is 2.787 mils, which is close to the experimentally measured value.

Picture 6. Diagram of Allan variance for gyro (deviation in deg/s versus averaging time t in s)
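For reference, a minimal Python sketch of the non-overlapping Allan deviation computation used to obtain such diagrams; the sampling rate and the data array below are placeholders, not the MTI acquisition files.

import numpy as np

def allan_deviation(x, fs, taus):
    """Non-overlapping Allan deviation of a rate signal x sampled at fs [Hz].

    x:    measured rate (e.g. gyro output in deg/s)
    taus: averaging times [s] at which to evaluate the deviation
    """
    adev = []
    for tau in taus:
        m = int(round(tau * fs))             # samples per cluster
        if m < 1 or 2 * m > len(x):
            adev.append(np.nan)
            continue
        n_clusters = len(x) // m
        means = x[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)   # Allan variance
        adev.append(np.sqrt(avar))
    return np.array(adev)

# Placeholder example: white noise at 100 Hz; real use would load the recorded gyro rate.
rate = np.random.normal(0.0, 0.01, size=360000)
taus = np.logspace(-2, 2, 30)
print(allan_deviation(rate, fs=100.0, taus=taus)[:5])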

The next part of the testing is measuring the azimuth determination error of the INS. Measurements were done in 36 positions, which differ from each other by 10 deg, with five measurements in each direction. This error determination refers to relative azimuth positions (the difference between two positions). A graphical presentation of the results is given in Picture 7.

Picture 7. Diagram of azimuth determination relative error (error in mils versus position in degrees; azimuth error and mean value of azimuth error)

According to these results, the mean value of the azimuth measurement error is in the range of +/-0.4 deg (7.1 mils). If we assume that a normal distribution would be obtained as the number of measurements increases, we can define the standard deviation as 2.3 mils. This value is even better than the theoretically calculated one; the reason is that the measurement took place in a stable temperature environment,

without any disturbance.

Testing in the feedback mode was also done in laboratory conditions. The INS was placed on the two-axis test table and position commands were issued for both axes in order to measure both the azimuth and the elevation displacement. Graphical presentations of the results are given in Pictures 8 and 9.

Picture 8. Diagram of azimuth measurement error (error in degrees versus measurement number)

Picture 9. Diagram of elevation measurement error (error in degrees versus measurement number)

4.2. Azimuth determination on a vehicle platform using INS

This chapter describes azimuth determination testing when the INS is mounted on a vehicle platform. The testing is done in an area where a very accurately defined direction exists. The referent axis of the platform is aligned with this known direction, and the INS is mounted so that its referent axis is parallel to the referent axis of the platform. The misalignment between these two axes represents the error of the INS mounting; this misalignment error is determined through a measuring procedure and can be corrected in the software application.

A targeting optical aperture is mounted on the platform in order to perform the rectification. The value of the azimuth angle of the direction can be calculated using the method of computing the azimuth between two points of known position, where the first point is the platform. The accuracy of the determined azimuth value depends on the distance between the two points, so in order to improve the accuracy it is good to have a rather large distance. In these measurements the azimuth of the direction is defined in the geographical coordinate system; the azimuth determined by the GPS device is defined in the same system. Measurements are done for different positions of the platform.
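For completeness, a small Python sketch (an illustration, not the authors' software) of the standard forward-azimuth formula between two points of known geographic position, which is the method of reference azimuth computation referred to above; the coordinates in the example are arbitrary.

import math

def azimuth_between(lat1, lon1, lat2, lon2):
    """Forward azimuth (degrees from true north) from point 1 to point 2.

    Spherical-Earth approximation; inputs in decimal degrees.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# Arbitrary example: platform position and a distant reference point
print(f"{azimuth_between(44.7410, 20.4000, 44.7300, 20.3900):.2f} deg")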

The azimuth determination process performed by the INS requires some time, and the accuracy depends on the calculation time interval. In these measurements the azimuth determination result was read after 3 minutes of data acquisition. Picture 10 presents an example of how the value of the azimuth angle changes as a function of time: the result of measuring the angle approaches some value with time.

Picture 10. Diagram of the azimuth approaching its value (INS azimuth in degrees versus time in seconds)

The results (error presentation) of all measurements performed in the field environment are presented in Picture 11.

Picture 11. INS measurement of azimuth error (INS error in mils versus direction in mils)

The limits of the error are +/-30 mils, which means that we can define the standard deviation as 10 mils. This value corresponds to the theoretically calculated value.

4.3. Azimuth determination of the vehicle's axis using GPS

The character of the azimuth data given by the GPS is shown in Picture 12.

Picture 12. GPS measurement of azimuth (GPS azimuth in degrees versus time in seconds)

There is a difference between the signal character of the GPS and of the INS azimuth measurement (see Pictures 10 and 12). For the purpose of azimuth determination by GPS, signal filtering or calculation of the mean value must be done. Picture 13 presents the standard deviation of the azimuth determination by GPS.

Picture 13. Standard deviation of GPS measurement of azimuth (standard deviation in mils versus direction in degrees)

Based on 24 measurements of GPS azimuth determination, the standard deviation is

2.89 mils.

The problem with GPS measurement of azimuth is that the device sometimes gives a value of azimuth that is very far from the accurate value. Because of that, it is very important to combine the azimuth measurements of the INS and the GPS: if the values are close to each other, we can take the GPS value as accurate. Picture 14 presents the comparison of the error evaluation of the INS (blue) and GPS (pink) azimuth determination.

Picture 14. INS and GPS measurement of azimuth errors (deviation in mils versus direction in mils, for INS and GPS)

If both measured values are close to each other, we can take the GPS value as accurate, because in this system the GPS azimuth measurement is more accurate. In circumstances of a faulty GPS signal, its azimuth value differs very much from the accurate value; this case is recognized when the GPS and INS azimuth measurement values are not close to each other. In that case, we can repeat the measurement, or we can use a topographic map for the position initialization and the INS measurements with the accuracy described above.

4.4. Testing in navigation mode

The navigation mode calculates the position data and the orientation angles of the vehicle while performing a mission movement. When combining these two devices, the accuracy is defined by the GPS accuracy. If the GPS signal is valid, the position and azimuth determination accuracies have the following values:

position accuracy = 0.4 m
azimuth accuracy = 2.89 mils

There are various methods to improve the accuracy of the inertial-only mode, but they are not the subject of this paper. Picture 15 presents the position diagram for both the GPS and the INS data.

Picture 15. INS and GPS navigation mode

5. CONCLUSION

This paper presented the implementation of a GPS aided INS with an azimuth determination function. All three working modes of the INS were shortly described. The greatest achievement is the realization of such a device itself and the definition of the azimuth measuring method using both the GPS and the INS.

References

[1] Catalog datasheet for gyro SRS-1000: Single Axis Precision Fiber Optical Gyroscope SRS-1000, Production Company "Optolink".
[2] Catalog datasheet for accelerometer A-15.
[3] iXBlue datasheet, ixblue-br-def-land-03-2015-web.
[4] Northrop Grumman datasheet, LN100g.
[5] D.H. Titterton, J.L. Weston, Strapdown Inertial Navigation Technology, co-published by the American Institute of Aeronautics and Astronautics, USA, and the Institution of Engineering and Technology, UK, 2009.


MODERNIZATION OF THE RADAR P12


VERICA MARINKOVIĆ NEDELICKI
IRITEL a.d., Belgrade, verica.presa.lebl@iritel.com
BRANISLAV PAVIĆ
IRITEL a.d., Belgrade, bane.presa.lebl@iritel.com
BORIS MIŠKOVIĆ
IRITEL a.d., Belgrade, boris.presa.lebl@iritel.com
MLADEN MILEUSNIĆ
IRITEL a.d., Belgrade, mladenmi.presa.lebl@iritel.com
PREDRAG PETROVIĆ
IRITEL a.d., Belgrade
ALEKSANDAR LEBL
IRITEL a.d., Belgrade
DRAGAN BORJAN
MO VS, Belgrade, borjandr@gmail.com
DEJAN IVKOVIĆ
Military Technical Institute, Belgrade, dejan.ivkovic@vti.vs.rs
DRAGAN NIKOLIĆ
Military Technical Institute, Belgrade, dragan.nikolic@vti.vs.rs

Abstract: In this paper we present the modernization of the radar P-12, which is used in the ARJ for PVD of the Serbian Armed Forces.
The modernization includes the digitalization of several radar blocks: the radar receiver VHF DP/P-12, the radar indicator (DiRI) and the radar
data extractor (RDE), using the principles of software radio. Besides these, the functions of the module for tracking moving
targets and for the control of the antenna position are improved. New functions, which were not available in the original radar
version, are realized: remote control and supervision of the radar functions, and connection of the radar to the integral telecommunication
system of the Serbian Armed Forces. In this way, the technical and tactical characteristics of the P-12 radar are improved and its
application in our armament is extended.
Keywords: radar modernization, digitalization, software radio, integral telecommunication system.

1. INTRODUCTION
Modern development of military forces in the world is
characterized by the constant adoption of new weapons and
new technical resources. This progress is permanently
accelerating, and following of this trend requires more and
more investment. In circumstances when the Serbian Armed
Forces, on the one hand, do not have sufficient material
resources, and on the other hand needs to keep pace with other
armies in the world, the logical decision is to modernize
existing weapons and military equipment. Therefore, the
modernization of existing equipment is the primary objective of
ARJ and PVD service in Serbian Armed Forces.
Radar is an important element of the system for
surveillance, data collection, presentation of the situation in
the airspace and fire management of the armed forces.
Radar systems, based on the surveillance and acquisition radar P-12, are still used in the defence system of the Serbian


Army. The experience of war activities has shown
that these radars are successful in detecting aircraft with
a reduced reflective surface (planes made in the "stealth"
technology), along with a reduced ability of anti-radar
missiles to detect and home on their radar radiation. Due to all these aspects it
is of great importance to maintain the functionality of this
radar in the defence system for as long as possible.
Within the Institute IRITEL great effort is devoted to the
radar systems development, based on the use of modern
technology [1] - [3]. Besides, IRITEL, in cooperation with
the Military Technical Institute and the Ministry of Defence,
at the right time recognized needs and perspectives of
modernization of existing older generation radar systems
[4]. Modernization of the existing radar system is also
implemented in the world [5], although this process still
refers to newer generation radars (P-18 and P-37).


The radar modernization involves two groups of activities.


The first group consists of improving existing features,
which is achieved by digitizing existing functions on
modern principles. The second group of activities is the
realization of new, advanced functions that were not
available in the old radar version. All this together assures
that the radar P-12 can be included within the modern
tactical radars of the latest generation.

Digital radar indicator, radar data extractor and moving


target indicator are realized as software modules that run on
a PC under the commercial operating system.

2.1. Digital radar receiver


Radar receiver in P-12 radar operates in frequency range
from 150MHz to 170MHz. As part of the modernization,
digital radar receiver VHF DPP-12 is developed as a
replacement of super-heterodyne receiver in an existing
solution. Modern solution of this receiver is based on the
principles of software-defined radio. RF block for reception
at the nominal frequency, on which the radar operates, is
analog (Fig.2), and digital signal processing is used, starting
from the moment when the input signal is downconverted to
the intermediate radar frequency. The method of digital
signal processing is very powerful and is being implemented
on an enhanced industrial computer with a built-in FPGA
platform. This method of digital signal processing is
universal and can be applied to other radar types. In the case
that we want to implement digital radar receiver for some
other radar type, it is only necessary to replace the analog
RF receiver.

Section 2 of the paper refers to the basic modernized radar


P-12 elements, with special attention paid to digital radar
receiver, the process of target detection and to digital radar
indicator. Section 3 analyzes in more details operation of the
radar receiving subsystem. The improvement of radar
receiver realization on the basis of FPGA platforms is
presented in section 4. Chapter 5 deals with the
telecommunications subsystem, which provides radar
connection to the network of Serbian Armed Forces.
Finally, the conclusions are given in section 6.

2. MAIN MODERNIZED RADAR P-12


SUBSYSTEMS
The main functional elements of modernized P-12 radar are
[6]:
reconstructed transmitting subsystem;
reconstructed antenna subsystem;
modernized receiving subsystem consisting of:
o digital receiver VHF DP/P-12 (Fig. 1);
o digital radar indicator (DiRI);
o radar data extractor (RDE);
o moving target indicator (MTI);
remote radar control and monitoring subsystem;
radar antenna management subsystem;
telecommunication subsystem;
o global positioning module.

Figure 2. Analog receiver block

2.2. Target detection algorithm


Coherent oscillator (COHO) is used in the radar in the
process of objects detection. COHO is implemented in
software, based on the phase prediction in the processing
algorithm. Modules which are implemented in the detection
process are MTI software defined clutter cancellation,
software defined threshold level by applying Constant False
Alarm Rate (CFAR) algorithm, protection against
intentional and unintentional interference by automatically
switching off the input low noise amplifiers (Fig. 3) and
post-detection.

Figure 3. Low noise amplifier
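The CFAR thresholding mentioned above can be illustrated with a short cell-averaging CFAR (CA-CFAR) sketch in Python; the window sizes and the false-alarm probability are assumptions for the example and do not correspond to the actual radar parameters.

import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-4):
    """Cell-averaging CFAR detector over a 1-D range profile of signal power.

    power:   squared-magnitude samples along range
    n_train: training cells on each side of the cell under test
    n_guard: guard cells on each side
    pfa:     desired probability of false alarm
    Returns a boolean detection mask of the same length as `power`.
    """
    n = len(power)
    alpha = 2 * n_train * (pfa ** (-1.0 / (2 * n_train)) - 1.0)   # threshold scaling factor
    detections = np.zeros(n, dtype=bool)
    for i in range(n_train + n_guard, n - n_train - n_guard):
        lead = power[i - n_guard - n_train : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise = (lead.sum() + lag.sum()) / (2 * n_train)          # local noise estimate
        detections[i] = power[i] > alpha * noise
    return detections

# Synthetic example: exponential noise plus one strong return at range bin 200
profile = np.random.exponential(1.0, 400)
profile[200] += 40.0
print(np.nonzero(ca_cfar(profile))[0])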

Figure 1. Digital radar receiver



2.3. Digital radar indicator

The digital radar indicator is introduced in the modernization instead of the panoramic indicator. It provides the following functions:
display of analog and digital video signals on the colour monitor in the plan position indicator (PPI) format, while faithfully emulating the continuous dimming and the double persistence effect which occur on analog indicators;
automatic plot presentation;
trace presentation, whereby it is possible to freely move the trace table and select the trace detail levels of the data shown in the table;
azimuth and distance display and measurement;
raster and vector map display;
display of the unprocessed radar video signal in A-scan format;
recording and playback of track and plot trajectories and their selection according to different parameters;
remote control of the radar operation, input and modification of data related to the radar operation, radar testing and status presentation;
optionally, manual or automatic selection to send the characteristics of monitored targets, tables and radar status;
transmission and reception of synthesized data and untreated radar images to the system for reception and presentation of the air situation.

The results of the radar proper operation testing and any


irregularity in the performance of radar vital functions are
indicated by the application on radar screen, light indication
and acoustic signalling.

3. BLOCK-DIAGRAM OF RADAR
RECEIVING SUBSYSTEM
The implementation of the radar receiver is based on the extensive
experience in developing different radar types and generations
that Institute IRITEL and the Military Technical
Institute have [7] - [10].

Figure 4. Analog receiver block


The receiver of the P-12 radar is composed of an analog and a digital block. The analog block is presented in Fig. 4, and the digital block in Fig. 5 [11], [12].

The signal from the receiving antenna is fed to the input of a band-pass filter over the switch that selects whether the signal for detection comes from the antenna or from a signal simulator (Fig. 4) [13]. The signal from the simulator is used to test the proper operation of the radar receiver. The implementation of this simulator is also based on the experience of Institute IRITEL and the Military Technical Institute in the development of simulators for different radar technologies [14].

The band-pass filter lets through only signals from the desired portion of the frequency spectrum, 150 MHz - 170 MHz. After that, the received signal is amplified in a low-noise amplifier (LNA). The frequency of the received signal should then be decreased; the first step in the frequency downconversion is the translation from RF to IF (10.7 MHz), which happens in the image rejection mixer. The reference signal for the operation of the mixer (160.7 MHz - 180.7 MHz) is generated in a stable local oscillator (STALO) [15]. The further steps in the signal processing are amplification (the 20 dB block), delay (the delay line block) and amplification whose value is regulated in the block for instantaneous automatic gain control (IAGC). At the exit of the analog part of the receiver, the signal at the intermediate frequency (IF, 10.7 MHz) is generated.


Figure 5. Digital receiver block


The signal at intermediate frequency (IF) is then carried on
to the input of the receiver digital block (Fig. 5). Here, the
signal is first digitized by an A/D converter. Further, the
signal at intermediate frequency (10,7MHz) is processed
exclusively digitally, first in a block for digital conversion
to base band (DDC). Components I and Q of the signal are
extracted, digital filtering is performed and sampling rate
adjusted. The subsequent signal processing and detection
algorithms are applied then to detect objects in the DSP
block, implementing procedures presented in subsection 2.2.
DSP block is based on Texas Instruments (TI) signal
processors.

4. RADAR RECEIVER IMPROVEMENT

On the basis of the results obtained with the first generation of radar receivers, its second, improved version is developed and implemented. It is realized on the Xilinx Virtex-II FPGA platform developed by IRITEL [9]; the latest generation of the IRITEL FPGA platform is based on a Spartan-3 device. It includes all three blocks presented in Fig. 5 (A/D, DDC and DSP) in only one component, thus reducing the system implementation from four printed circuit boards to only one, reducing the radar receiver price and improving the system reliability. Besides this, FPGA technology allows a significantly greater processing rate than DSP technology, so systems with better characteristics can be realized. The sampling rate and the signal coding precision in the receiver based on FPGA technology are 80 MSPS and 14 bits, respectively, compared to 32 MSPS and 10 bits in the previous radar receiver realization based on the principles of Fig. 5. These better processing characteristics allow the implementation of completely new processing algorithms, for example adaptive adjustment of the matched filter using the impulse response of the system and sample processing at the intermediate frequency.

The block diagram of the DDC in the system based on the FPGA platform is presented in Fig. 6. As the sampling frequency is 80 MSPS, it is possible to use an increased intermediate frequency of 21.4 MHz for signal processing. A section of Cascaded Integrator-Comb (CIC) filters then decreases the sampling frequency 10 times. Half-band filters, implemented after the CIC filters, compensate the attenuation of the CIC filters and further reduce the sampling frequency two times. At the end, matched filters (realized as Finite Impulse Response, FIR, filters) allow optimal signal detection in the presence of additive white Gaussian noise. The coefficients of the FIR filters can be loaded from the radar receiver processor and depend on the estimated transmission channel characteristics. The FIR filters reduce the sampling frequency five times, thus reaching a complete sampling frequency reduction of one hundred times in the DDC.

Figure 6. Block-diagram of DDC
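The decimation chain described above (CIC by 10, half-band by 2, matched FIR by 5, i.e. 100 in total) can be sketched in Python as follows; the filter designs below are generic placeholders standing in for the FPGA CIC, half-band and matched-filter coefficients, not the implemented ones.

import numpy as np
from scipy.signal import firwin, upfirdn

FS_IN = 80e6          # A/D sampling rate [samples/s]
F_IF = 21.4e6         # intermediate frequency used for processing [Hz]

def ddc(samples, matched_taps):
    """Digital downconversion sketch: mix to baseband, then decimate 10 x 2 x 5."""
    n = np.arange(len(samples))
    bb = samples * np.exp(-2j * np.pi * F_IF / FS_IN * n)     # complex mix to baseband (I/Q)

    # Stage 1: stand-in for the CIC section, decimation by 10
    cic_like = firwin(101, 0.08)
    s1 = upfirdn(cic_like, bb, up=1, down=10)

    # Stage 2: half-band low-pass, decimation by 2 (also compensates stage-1 droop)
    halfband = firwin(31, 0.45)
    s2 = upfirdn(halfband, s1, up=1, down=2)

    # Stage 3: matched FIR filter, decimation by 5; taps would come from the estimated channel
    s3 = upfirdn(matched_taps, s2, up=1, down=5)
    return s3                                                  # output rate: 80 MSPS / 100

# Example with a placeholder rectangular-pulse matched filter
x = np.random.randn(100000)
print(ddc(x, matched_taps=np.ones(8) / 8).shape)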



5. TELECOMMUNICATION SUBSYSTEM

The realized telecommunication subsystem provides the connection of the radar to the communication network of the Serbian Armed Forces. It is especially important that the communication between the commander who leads the radar operation and a higher command level is realized using command and warning signals or by receiving and sending short messages, notifications and reports, therefore without any voice communication. A high level of protection of the communication from unauthorized interception is provided in this way.

The telecommunication subsystem provides the networking of operators, commanders and separated workplaces. It realizes the data exchange with the command information system (CIS), as well as the submission of data about radar plots and traces. The components of this subsystem are an Ethernet switch, an optical modem, an IP phone, a radio set, an interface converter and the communication module.

6. CONCLUSIONS

The most modern hardware and software solutions are implemented for the radar P-12 modernization. Thus, the realization of the digital radar receiver is based on the principles of software-defined radio. In addition, the digital radar indicator is realized as a flexible software module with the possibility to easily expand its functions. The devices that are integrated into the P-12 radar system are flexible in software, mobile in the spatial and adaptive in the user context. Automatic control of the radar components and fault diagnostics allow easier technical maintenance of the radar. The most important radar functions (detection of targets and determination of their characteristics in the air, target tracking, determination of their affiliation and transmission of surveillance data) are automated. The radar is connected to an automated system for receiving, processing and sending radar information. All these activities make it possible to extend the radar life expectancy, increase its reliability and ease its handling.

The radar modernization presented in this paper considers the receiving subsystem. The modernization is based on the implementation of highly advanced semiconductor technologies and components. The most advantageous solution is based on FPGA platforms, which allow a very high signal processing rate and, thus, highly improved characteristics of the radar receiver.

The future development will consider the modernization of the radar transmitting subsystem, using a completely semiconductor transmitter technology that allows future implementation of radar pulse compression techniques.

References
[1] Petrović,P.: Research in Software Defined Radio and AESA Radar Technology, Serbia-Italia/Status and Perspectives of the Scientific and Technological Bilateral Cooperation, 2012, pp. 19-20.
[2] Jovanović,P., Mileusnić,M., Petrović,P.: An Approach to Analysis of AESA Based Radio Systems, XII International Scientific-Professional Symposium INFOTEH 2013, March 2013, Vol. 12, pp. 372-376.
[3] Petrovi,P.: Institute IRITEL possibilities consideration
in the realization of radar modernization and
development, Round-table Directions of radar
improvement and development to use in Serbian
Armed Forces, Military Technical Institute, 2012., in
Serbian.
[4] Petrovi,P., Marinkovi-Nedelicki,V., Pavi,B.,
Mikovi,B.: Modernization of OAR P-12 based on
software-defined radio and its perspective, Round-table
Software Defined Radio, Military Technical
Institute, 2012., in Serbian.
[5] http://www.tcz.cz/en/radiolocation/radar-systemmodernizations
[6] Borjan,D.: Modernization of surveillance acquisition
radar P-12: In step with time, Arsenal, February 2014.,
pp. 2-8., in Serbian.
[7] Remenski,N., Pavi,B., Mileusni, M., Petrovi, P.:
Practical Realization of Digital Radar Receiver, 49.
Conference ETRAN, Budva, June 2005., pp. 105-108.,
in Serbian.
[8] Remenski,N., Marinkovi-Nedelicki, V., Tadi, V.,
Petrovi, P.: The Control of New Generation Digital
Radar Receiver, INFOTEH 2006, March 2006., Vol. 5.
Ref. B-II-10, pp. 114-118., in Serbian.
[9] Dramianin, D., Vlahovi, V., Remenski, N., Pavi, B.,
Petrovi,P.: FPGA Implementation of the Digital
Radar Receiver, INFOTEH 2006, March 2006., Vol. 5.
Ref. B-II-2, pp. 80-84., in Serbian.
[10] Marinkovi,V., Remenski, N., Tadi, V., Petrovi, P.:
The Software for Control of Digital Radar Receiver
VHF DP/P-12, 51. Conference ETRAN, Budva,
Herceg Novi Igalo, 2007., in Serbian.
[11] Land-based air defence radars, Serbia: VHF DR/P12/18, in the book Streetly, M.: Janes Radar And
Electronic Warfare Systems, IHS Global Limited,
2008-2015.
[12] VHF DR/P-12/18: Digital Radar Receiver, data sheet
IRITEL A.D., www.iritel.com
[13] Marinkovi,V., Pavi,B., Toth,A.: Radar Signal
Simulator for New Generation Digital Radar Receiver,
INFOTEH, Vol. 7. Ref. B-II-18, March 2008, pp. 228231., in Serbian.
[14] Jovanovi,P., Mileusni,M., Pavi,B., Mikovi,B.:
DDS Based Pulse-Dopler Radar Transmitter Simulator,
XIII International Scientific-Professional Symposium
INFOTEH 2014, March 2014., Vol. 13. Ref. B-II-5,
pp. 425-428.
[15] Marinkovi,V., Pavi,B.: Stable Local Oscillator for
New Generation Digital Radar Receiver, YUINFO 09,
March 2009., pp. 1-4., in Serbian.



DISTRIBUTED TARGET TRACKING IN CAMERA NETWORKS USING


AN ADAPTIVE STRATEGY
NEMANJA ILIĆ
College of Technics and Technology, Kruševac, Serbia and Vlatacom Institute, Belgrade, Serbia, nemili@etf.rs
KHALED OBAID AL ALI
Etimad, Abu Dhabi, UAE and Vlatacom Institute, Belgrade, Serbia, khaled@etimad.ae
MILOŠ S. STANKOVIĆ
Innovation Center, Faculty of Electrical Engineering, Belgrade, Serbia, milsta@kth.se
SRDJAN S. STANKOVIĆ
Faculty of Electrical Engineering, Belgrade and Vlatacom Institute, Belgrade, Serbia, stankovic@etf.rs

Abstract: In this paper the problem of distributed target tracking in large scale camera networks using consensus based
algorithms is considered. These networks are typified by sparse communication and coverage topologies, restrictions
that motivate modification of the existing widely adopted consensus based distributed estimation
frameworks. Several state of the art consensus based algorithms from the literature that take these
restrictions into account are inspected, as well as a novel adaptive strategy having much weaker requirements on the communication
load. A comprehensive comparison of the aforementioned algorithms in view of their state estimation performance, based
on the mean estimation error and the disagreement between the nodes' estimates, is given. The differences between the algorithms in
terms of communication requirements are discussed in detail.
Keywords: Camera networks, Distributed target tracking, Consensus, Decentralized adaptation.


AUTONOMOUS MOBILE ROBOT PATH PLANNING


IN COMPLEX AND DYNAMIC ENVIRONMENTS
NOVAK ZAGRADJANIN
MoD RS, Department for Defence Technologies, Belgrade, e-mail: zagradjaninnovak@gmail.com
STEVICA GRAOVAC
University of Belgrade, School of Electrical Engineering, Belgrade, e-mail: graovac@etf.bg.ac.rs

Abstract: Robots operating in the real world often have limited time available for planning their next actions.
Producing optimal plans is infeasible in these scenarios. Instead, robots must be satisfied with the best plans they can
generate within the time available. A second challenge associated with planning in the real world is that models are
usually imperfect and environments are often dynamic. Thus, robots need to update their models and consequently
plans over time. In this paper, we present and compare three grid-based path planners. First we present Anytime
Repairing A* (ARA*) algorithm that provides suboptimality bounds on the quality of the solution at any point in time.
Then we present an incremental extension of the A* algorithm called D* Lite that is used for robot navigation in unknown
terrain, including goal-directed robot navigation and mapping of unknown terrain. At the end we present Anytime
Dynamic A* (AD*) algorithm that is both anytime and incremental. This extension improves its current solution while
deliberation time allows and is able to incrementally repair its solution when changes to the world model occur.
Keywords: path, planning, heuristic, anytime, incremental.
1. INTRODUCTION

In this paper we present search algorithms for planning paths through large, dynamic graphs. Such graphs can be used to model a wide range of problem domains in artificial intelligence and robotics. A* search and Dijkstra's algorithm are two commonly used and extensively studied approaches that generate optimal paths through graphs. They guarantee obtaining an optimal solution when no other information besides the graph and heuristics (in the case of A*) is provided. Realistic planning problems, however, are often too large to solve optimally within an acceptable time. Anytime planning algorithms try to find the best plan they can within the amount of time available to them. They quickly find an approximate and possibly highly suboptimal plan and then improve this plan while time is available. Anytime Repairing A* (ARA*) is an anytime version of A* search. This algorithm has control over a suboptimality bound for its current solution, which it uses to achieve the anytime property. It starts by finding a suboptimal solution quickly and then improves its result. Given enough time it finds a provably optimal solution.

While anytime planning algorithms are very useful when good models of the environment are known a priori, they are less beneficial when prior models are not very accurate or when the environment is dynamic. In these situations, the robot may need to update its world model frequently. Each time its world model is updated, all of the previous efforts of the anytime planners are invalidated and they need to start generating a new plan from scratch. For example, in mobile robot navigation a robot may start out knowing the map only partially, assuming that all unknown areas are safe to traverse, and then begin executing the plan. While executing the plan, it senses the environment around it and as it discovers new obstacles it updates the map and constructs a new plan. As a result, the robot has to plan frequently during its execution. Anytime planners are not able to provide anytime capability in such scenarios, as they are constantly having to generate new plans from scratch. A class of algorithms known as replanning, or incremental, algorithms are effective in such cases, as they use the results of previous planning efforts to help find a new plan when the problem has changed slightly. One of them, D* Lite, is particularly useful for heuristic search-based replanning in artificial intelligence and robotics.

Anytime Dynamic A* (AD*) is a search algorithm that is both anytime and incremental. This algorithm re-uses its old search efforts while simultaneously improving its previous solution (as with ARA*) as well as re-planning if necessary (as with D* Lite).

2. PATH PLANNING

In this paper we concentrate on planning problems represented as a search for a path in a known finite graph. We use S to denote the finite set of states in the graph, succ(s) denotes the set of successor states of state s ∈ S, and pred(s) denotes the set of predecessor states of s. For any pair of states s, s' ∈ S such that s' ∈ succ(s) we

require the cost of transitioning from s to s' to be positive:
0 < c(s, s') ≤ ∞. Given such a graph and two states sstart
and sgoal, the task of a search algorithm is to find a path
from sstart to sgoal, as a sequence of states {s0, s1, ..., sk}
such that s0 = sstart, sk = sgoal and for every 1 ≤ i ≤ k,
si ∈ succ(si-1). This path defines a sequence of valid
transitions between states in the graph, and if the graph
accurately models the original problem, a robot can
execute the actions corresponding to these transitions to
solve the problem. The cost of the path is the sum of the
costs of the corresponding transitions. For any pair of
states s, s' ∈ S we let c*(s, s') denote the cost of a least-cost path from s to s'. For s = s' we define c*(s, s') = 0.


The goal of shortest path search algorithms such as A*


search is to find a path from sstart to sgoal whose cost is
minimal, i.e. equal to c*(sstart, sgoal). Suppose for every
state s ∈ S we knew the cost of a least-cost path from
sstart to s, that is, c*(sstart, s). We use g(s) to denote this
cost. Then a least-cost path from sstart to sgoal can be re-constructed in a backward fashion as follows: start at
sgoal, and at any state si pick a state si-1 = arg
min_{s' ∈ pred(si)} (g(s') + c(s', si)) until si-1 = sstart.


In particular, A* maintains g-values for each state it has


visited so far, where g(s) is always the cost of the best
path found so far from sstart to s. If no path to s has been
found yet then g(s) is assumed to be (this includes the
states that have not yet been visited by the search). A*
starts by setting g(sstart) to 0 and processing this state first
(hereinafter we call this process state expansion). The
expansion of state s involves checking if a path to any
successor state s' of s can be improved by going through
state s, and if so then setting the g-value of s' to the cost
of the new path found and making it a candidate for future
expansion. This way, s' will also be selected for
expansion at some point and the cost of the new path will
be propagated to its children.

A* maintains four functions from states to real numbers:
- g(s) is the minimum cost of moving from the start
state to s found so far;
- h(s) (heuristic value) estimates the minimum cost of
moving from s to sgoal;
- f(s) = g(s) + h(s), or the key value, is an estimate of the
minimum cost of moving from the start state via s to the
goal state;
- the parent pointer parent(s) points to one of the
predecessor states s' of s from which g(s) is derived,
and s' is called the parent of s. The parent pointers are
used to extract the path after the search terminates.

Main ( )
01. for all s ∈ S
02.   g(s) = ∞;
03. g(sstart) = 0;
04. OPEN = ∅;
05. insert sstart into OPEN with value f(sstart) = g(sstart) + h(sstart);
06. ComputeShortestPath();

ComputeShortestPath()
07. while (sgoal is not expanded)
08.   remove s with the smallest value f(s) from OPEN;
09.   for all s' ∈ succ(s)
10.     if (g(s') > g(s) + c(s, s'))
11.       g(s') = g(s) + c(s, s');
12.       parent(s') = s;
13.       insert s' into OPEN with new value f(s') = g(s') + h(s');

Figure 1: A* algorithm (simplified forwards version)
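As a runnable counterpart to the pseudocode in Figure 1 (a sketch following the same structure, not code from the paper), here is a small Python A* over a graph given by a successor function, with an explicit priority queue:

import heapq

def a_star(succ, cost, h, s_start, s_goal):
    """A* search mirroring Figure 1.

    succ(s)     -> iterable of successor states
    cost(s, s2) -> positive transition cost c(s, s2)
    h(s)        -> heuristic estimate of the cost from s to s_goal
    Returns (path, g(s_goal)) or (None, inf) if no path exists.
    """
    g = {s_start: 0.0}                          # best cost found so far
    parent = {s_start: None}
    open_heap = [(h(s_start), s_start)]         # priority = f(s) = g(s) + h(s)
    closed = set()
    while open_heap:
        _, s = heapq.heappop(open_heap)
        if s in closed:
            continue                             # stale entry
        if s == s_goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1], g[s_goal]
        closed.add(s)
        for s2 in succ(s):
            new_g = g[s] + cost(s, s2)
            if new_g < g.get(s2, float("inf")):
                g[s2], parent[s2] = new_g, s
                heapq.heappush(open_heap, (new_g + h(s2), s2))
    return None, float("inf")

# Tiny example on a 4-connected grid with unit costs and Manhattan heuristic
def succ(p):
    x, y = p
    return [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]

print(a_star(succ, lambda a, b: 1.0,
             h=lambda p: abs(p[0] - 3) + abs(p[1] - 2),
             s_start=(0, 0), s_goal=(3, 2)))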


The challenge for shortest path search algorithms is to
minimize the amount of processing. The A* algorithm,
whose simplified pseudocode is presented in Figure 1,
expands states from OPEN until sgoal is expanded.
Using h-values focuses the search on the states through
which the whole path from sstart to sgoal looks promising. It
can be much more efficient than expanding all states
whose g-values are smaller than or equal to g(sgoal), which
is required by Dijkstra's algorithm to guarantee that the
solution it finds is optimal.

A* also maintains a priority queue, OPEN, of states
which it plans to expand. In order to explain the purpose
of forming the priority queue, let us first introduce the notion of
local inconsistency. A state is called locally inconsistent
every time its g-value is decreased, and until the next time
the state is expanded. Suppose that state s is the best
predecessor for some state s', that is, g(s') =
min_{s'' ∈ pred(s')}(g(s'') + c(s'', s')) = g(s) + c(s, s'). Then, if g(s)
decreases we get g(s') > min_{s'' ∈ pred(s')}(g(s'') + c(s'', s')).
In other words, the decrease in g(s) introduces a local
inconsistency between the g-value of s and the g-values of
its successors. Whenever s is expanded, on the other hand,
the inconsistency of s is corrected by re-evaluating the
g-values of the successors of s. This in turn makes the
successors of s locally inconsistent. In this way the local
inconsistency is propagated to the children of s via a
series of expansions. Given this definition of local
inconsistency it is clear that the OPEN list consists of
exactly the locally inconsistent states. The OPEN queue is
sorted by f(s), so that A* always expands next the state
which appears to be on the shortest path from start to
goal. A* initializes the OPEN list with the start state, sstart.
Each time it expands a state s, it removes s from OPEN. It
then updates the g-values of all of s's neighbors; if it
decreases g(s'), it inserts s' into OPEN.

3. ANYTIME ALGORITHMS
When a robot must react quickly and the planning
problem is complex, computing optimal paths as
described in the previous sections can be infeasible, due
to the large number of states required to be processed in
order to obtain such paths. In such situations, we must be
satisfied with the best solution that can be generated in the
time available.

A useful class of deterministic algorithms for addressing


this problem are commonly referred to as anytime
algorithms. Anytime algorithms typically construct an
initial, possibly highly suboptimal, solution very quickly,
then improve the quality of this solution while time
permits. Heuristic-based anytime algorithms often make
use of the fact that inflating the heuristic values used by
A* (resulting in the weighted A* search) provides
substantial speed-ups at the cost of solution optimality.
Further, if the heuristic used is consistent (a forwards
heuristic h(s) is consistent if, for all s S, h(s) c(s, s') +
h(s') for any successor s' of s if s sgoal and h(s) = 0 if s
= sgoal), then multiplying it by an inflation factor > 1
will produce a solution guaranteed to cost no more than
times the cost of an optimal solution [1]. ARA* performs
a succession of weighted A* searches, each with a
decreasing inflation factor, where each search reuses
efforts from previous searches [2]. This approach
provides suboptimality bounds for each successive search
and has been shown to be much more efficient than
competing approaches. ARA* limits the processing
performed during each search by only considering those
states whose costs at the previous search may not be valid
given the new value. It begins by performing an A*
search with an inflation factor 0, but during this search it
only expands each state at most once. To implement this
restriction the set of expanded states is maintained in the
CLOSED variable. With this restriction OPEN may no
longer contain all the locally inconsistent states. It is
important, however, to keep track of all locally
inconsistent states as they will be the starting points for
inconsistency propagation in the future search iterations.
Consequently, once a state s has been expanded during a
particular search, if it becomes inconsistent due to a cost
change associated with a neighboring state, then it is not
reinserted into the queue of states to be expanded, but it is
placed into the INCONS list, which contains all
inconsistent states already expanded. Then, when the
current search terminates, the states in the INCONS list
are inserted into a fresh priority queue (with new
priorities based on the new inflation factor ε) which is
used by the next search. This improves the efficiency of
each search in two ways. Firstly, by only expanding each
state at most once a solution is reached much more
quickly. Secondly, by only reconsidering states from the
previous search that were inconsistent, much of the
previous search effort can be reused. Thus, when the
inflation factor is reduced between successive searches, a
relatively minor amount of computation is required to
generate a new solution. Here, the priority of each state s
in the OPEN queue is computed as the sum of its cost g(s)
and its inflated heuristic value ε·h(sstart, s).
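A hedged sketch of how a single ARA* improvement iteration can manage OPEN, CLOSED and INCONS as described above; names and the termination test are illustrative simplifications, and the exact pseudocode is given in [2].

```python
import heapq

def improve_path(eps, OPEN, g, h, succ, cost, s_goal):
    """One ARA* search iteration with inflated keys f(s) = g(s) + eps*h(s).
    Each state is expanded at most once; states whose g-value decreases after
    they were already expanded are stored in INCONS instead of OPEN."""
    CLOSED, INCONS = set(), set()

    def key(s):                       # inflated priority of a state
        return g.get(s, float("inf")) + eps * h(s)

    heap = [(key(s), s) for s in OPEN]
    heapq.heapify(heap)
    while heap and heap[0][0] < key(s_goal):
        _, s = heapq.heappop(heap)
        if s in CLOSED:
            continue
        CLOSED.add(s)
        for s2 in succ(s):
            g2 = g[s] + cost(s, s2)
            if g2 < g.get(s2, float("inf")):
                g[s2] = g2
                if s2 not in CLOSED:
                    heapq.heappush(heap, (key(s2), s2))
                else:
                    INCONS.add(s2)    # locally inconsistent but already expanded
    # locally inconsistent states that seed the next, less inflated, search
    return set(s for _, s in heap) | INCONS
```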


graph may turn out to be invalid as it receives updated


information. For example, the robot may be equipped
with an onboard sensor that provides updated
environment information as the robot moves [3], [4]. It is
thus important that the robot is able to update its graph
and replan new paths when new information arrives. One
approach for performing this replanning is simply to
replan from scratch: given the updated graph, a new
optimal path can be planned from the robot position to the
goal using A*, exactly as described above. However,
replanning from scratch every time the graph changes can
be very computationally expensive.
D* Lite is extension of A* able to cope with changes to
the graph used for planning [5]. D* Lite initially
constructs an optimal solution path from the initial state to
the goal state in exactly the same manner as backwards
A* (i.e. search is performed from sgoal to sstart). D* Lite
maintains for each state two values estimating path cost to
the goal, which it uses to propagate path costs between
neighboring states in the space. Each state has a base path
cost estimate g, along with another estimate called rhs
that represents the path cost estimate derived from
looking at the g values of its neighbors: rhs(s) =
min_t(c(s, t) + g(t)) over the successors t of s, or zero if s is the goal state. In
implementation, each state maintains a pointer to the state
from which it derives its rhs value.
When the algorithm has updated the pointers, the formula
for rhs indicates that the robot should follow the pointer
from its current state to pursue an optimal path to the
goal. A state is called consistent if its g and rhs values are
equal. It is called overconsistent if g > rhs and
underconsistent if g < rhs. Each non-goal state starts with
its g value equal to ∞. When the g values of neighboring
states are lowered (to a finite path cost estimate), the rhs
value of the state is lowered, making it overconsistent. If
at some point the g values of neighboring states are raised
(an increase in the path cost estimate from those states),
the state may have its rhs value raised, which may make it
underconsistent. Inconsistent states eventually trigger
updates in their neighbors, hence overconsistent states
propagate path cost reductions through a region, while
underconsistent states propagate path cost enlargements
through a region.
Like A*, D* Lite uses a heuristic to focus its search and
to order its cost updates efficiently. D* Lite also
maintains an OPEN list of inconsistent states to be
expanded in the current search iteration. At each step it
expands the state on the OPEN list with the minimum key
value. The priority, or key value, of a state s in the queue
is: key(s)=[k1(s), k2(s)]= [min(g(s),rhs(s))+h(sstart, s),
min(g(s),rhs(s))]. A lexicographic ordering is used on the
priorities, so that priority key(s) is less than or equal to
priority key(s'), denoted key(s) ≤ key(s'), if k1(s) < k1(s') or
both k1(s) = k1(s') and k2(s) ≤ k2(s').
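The key computation and the lexicographic ordering can be written compactly, for example as in the following small sketch; Python tuples already compare lexicographically, which matches the ordering defined above.

```python
def dstar_lite_key(s, g, rhs, h, s_start):
    """Priority of state s in the D* Lite OPEN list:
    key(s) = [min(g(s), rhs(s)) + h(s_start, s), min(g(s), rhs(s))]."""
    m = min(g.get(s, float("inf")), rhs.get(s, float("inf")))
    return (m + h(s_start, s), m)

# Tuples compare lexicographically, so key(s) <= key(s2) holds exactly when
# k1 < k1' or (k1 == k1' and k2 <= k2'), as required by the algorithm.
```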

4. INCREMENTAL REPLANNING
ALGORITHMS
The above approaches work well for planning an initial
path through a known graph or planning space. However,
when operating in real world scenarios, robots typically
do not have perfect information. Rather, they may be
equipped with incomplete or inaccurate planning graphs.
In such cases, any path generated using the robots initial

When changes to the planning graph are made (i.e. the


cost of some edge is altered), the states whose paths to the
goal are immediately affected by these changes have their
path costs updated and are placed on the planning queue
(OPEN list) to propagate the effects of these changes to

the rest of the state space. In this way, only the affected
portion of the state space is processed when changes
occur. Furthermore, D* Lite uses a heuristic to further
limit the states processed to only those states whose
change in path cost could have a bearing on the path cost
of the initial state. As a result, it can be up to two orders
of magnitude more efficient than planning from scratch
using A*. Finally, it is important to point out that D* Lite
can be pre-configured either to search for an optimal
solution (ε = 1) or to search for a solution bounded by a
fixed suboptimality factor (ε > 1).


6. AN EXAMPLE
Pseudocode of ARA*, D* Lite and AD*, according to
which we implemented these algorithms in Matlab, can be
found in [2], [5] and [6]. In order to compare ARA*, D*
Lite and AD* consider a robot moving on a 7x6 square
grid from the bottom right cell (S) to the top left cell (G),
figure 2. We model the grid as 8-connected: each cell is
connected to its horizontal, vertical, and diagonal
neighbors. This example has no dynamic constraints, so
the robot can move through any path-connected sequence
of cells. The movement costs of diagonal edges are √2,
the costs of other edges are 1, and the costs of edges into
obstacles are ∞. The robot begins knowing a subset of the
obstacles, and can only detect new obstacles among its
immediate neighbors in the grid. The heuristic used by
each algorithm is the larger of the x (horizontal) and y
(vertical) distances from the current cell to the cell
occupied by the robot.
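For the 8-connected grid of this example, the edge costs and the heuristic described above could be coded as in the following small sketch (an assumption-level illustration, not the authors' Matlab code).

```python
import math

def edge_cost(a, b, obstacles):
    """Cost between neighboring cells a=(x1, y1) and b=(x2, y2):
    1 for straight moves, sqrt(2) for diagonal moves, infinite into obstacles."""
    if b in obstacles:
        return float("inf")
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return math.sqrt(2.0) if dx == 1 and dy == 1 else 1.0

def heuristic(cell, robot_cell):
    """Larger of the horizontal and vertical distances to the robot's cell."""
    return max(abs(cell[0] - robot_cell[0]), abs(cell[1] - robot_cell[1]))
```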

5. ANYTIME REPLANNING
ALGORITHMS
Although each is well developed on its own, there has
been relatively little interaction between the above two
areas of research. Replanning algorithms have
concentrated on finding a single, usually optimal,
solution, and anytime algorithms have concentrated on
static environments. But some of the most interesting real
world problems are those that are both dynamic (requiring
replanning) and complex (requiring anytime approaches).
Anytime Dynamic A* (AD*) is an algorithm that
combines the replanning capability of D* Lite with the
anytime performance of ARA* [6]. AD* performs a
series of searches (in a backward fashion as D* Lite)
using decreasing inflation factors to generate a series of
solutions with improved bounds, as with ARA*. When
there are changes in the environment affecting the cost of
edges in the graph, locally affected states are placed on
the OPEN queue to propagate these changes through the
rest of the graph, as with D* Lite. States on the queue are
then processed until the solution is guaranteed to be ε-suboptimal. AD* begins by setting the inflation factor ε to
a sufficiently high value ε0, so that an initial, suboptimal
plan can be generated quickly. Then, unless changes in
edge costs are detected, ε is gradually decreased and the
solution is improved until it is guaranteed to be optimal,
that is, ε = 1. This phase is exactly the same as for ARA*:
each time ε is decreased, all inconsistent states are moved
from INCONS to OPEN and CLOSED is made empty.
When changes in edge costs are detected, there is a
chance that the current solution will no longer be ε-suboptimal. If the changes are substantial, then it may be
computationally expensive to repair the current solution
to regain ε-suboptimality. In such a case, the algorithm
increases ε so that a less optimal solution can be produced
quickly. Because edge cost increases may cause some
states to become underconsistent, a possibility not present
in ARA*, states need to be inserted into the OPEN queue
with a key value reflecting the minimum of their old cost
and their new cost. Further, in order to guarantee that
underconsistent states propagate their new costs to their
affected neighbors, their key values must use admissible
heuristic values. This means that different key values
must be computed for underconsistent states than for
overconsistent states. By incorporating these
considerations, AD* is able to handle both changes in
edge costs and changes to the inflation factor ε. This
allows the robot to improve and update its solution path
while it is being traversed.
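A sketch of the two key formulas implied by this paragraph: overconsistent states may use the inflated heuristic, while underconsistent states keep an uninflated, admissible term; see [6] for the exact pseudocode.

```python
def ad_star_key(s, g, rhs, h, s_start, eps):
    """AD* priority: inflate h only for overconsistent states (g > rhs);
    underconsistent states (g < rhs) keep an admissible, uninflated key."""
    gs = g.get(s, float("inf"))
    rs = rhs.get(s, float("inf"))
    if gs > rs:
        return (rs + eps * h(s_start, s), rs)   # overconsistent
    return (gs + h(s_start, s), gs)             # underconsistent or consistent
```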

Black cells represent obstacles and white cells represent


free space. The cells expanded by each algorithm for each
subsequent robot position are shown in grey. The arrow
from each state is its current pointer (if any). The pointers
forming the resulting path are connected by the blue line.
Each of ARA* and AD* begins by computing a
suboptimal solution using an inflation factor of ε = 3. The
path cost of this result is guaranteed to be at most 3 times
the cost of an optimal path. Up to this point, ARA* and
AD* have expanded 12 and 13 cells, respectively. When
the robot moves one more step and finds out the changes
in the environment (new obstacle), each approach reacts
differently. Because ARA* cannot incorporate edge cost
changes, it must replan from scratch with this new
information. Using an inflation factor of 2 it produces a
solution after expanding again 12 cells. AD*, on the other
hand, is able to repair its previous solution given the new
information and lower its inflation factor at the same time.
Thus, the only cells that are expanded are the 8 cells
whose cost is directly affected by the new information
and that reside between the robot and the goal. While the
robot moves one more step along this new path the
solution is improved and both ARA* and AD* produce an
optimal result by reducing the value of ε to 1 (results of
the previous search are reused). The number of expanded
cells in this iteration is greater for ARA* (11 cells) than
for AD* (5 cells), because ARA* replanned from scratch
in the second iteration, which annulled the results of the
first iteration. It is important to emphasize that in the
second iteration AD* expands cell (6,6) twice, once as
underconsistent and once as overconsistent. Related proof
can be found in [6].
In this scenario we use the basic version of the D* Lite
algorithm without an inflation factor (i.e. ε = 1), so it
produces an optimal solution already in the first iteration
and maintains it afterwards, but for this it required 19 cells
to be expanded (7 cells more than ARA* and 6 cells more
than AD*). This means that D* Lite generates the initial
solution later than ARA* and AD*. When changes in the
environment are detected, D* Lite repairs the current
solution using the results of the previous search. In
summary, it expanded 27 cells.

Overall, the total number of cells expanded by ARA* is 35,


by AD* is 26 and by D* Lite is 22. Because AD* reuses
previous solutions and uses an inflated heuristic in the same
way as ARA*, and repairs invalidated solutions in the
same way as D* Lite, it is able to provide anytime
solutions in dynamic environments very efficiently.


In Figure 3 we can see the results of the search (resulting path
and expanded cells) that involves detection of a new free
cell.

Figure 2. A simple robot navigation example (detection of new obstacle)

Figure 3. A simple robot navigation example (detection of new free cell)




7. DISCUSSION AND EXTENSIONS


A common approach used in robotics for performing path
planning is to combine an approximate global planner
with an accurate local planner. The global planner
computes paths through the grid that ignore the kinematic
and dynamic constraints of the vehicle. Then, the local
planner takes into account the constraints of the vehicle
and generates a set of feasible local trajectories that can
be taken from its current position. Most grid-based 2D
(positions x and y) path planners use discrete state
transitions that artificially constrain the robot's motion to a
small set of possible headings (e.g. 0, π/4, π/2, etc.). As a
result even when these optimal planners are used in
conjunction with some local planner, they can still cause
the vehicle to execute expensive trajectories involving
unnecessary turning. A more comprehensive approach is
to create a higher dimensional planner that incorporates
the kinematic and dynamic constraints of the robot [7].
An approach is to use the cost-to-goal value function of
the global 2D planner as a heuristic function to focus an
anytime global 4D (positions - x and y, vehicle
orientation - θ, and forward speed - v) grid-based
trajectory planner. However, these higher dimensional
approaches can be much more computationally expensive
than standard grid-based planners and are still influenced
by the results of the initial grid-based solution. Some
prior efforts have used optimal graph planning algorithms
to generate paths intended to be more physically
realizable by vehicles with constrained dynamics. One
effort in this direction is Field D* [8], which extends grid-based D* using linear interpolation to allow nodes
corresponding to arbitrary positions and headings within
each grid cell. This approach has proven quite successful
for robots navigating fields with sparse obstacles, but
seems likely to exhibit the same problems as classic grid-based paths with the more constrained vehicles and more
obstructed environments encountered in urban driving.
Field D* is currently being used by a wide range of
fielded robotic systems (Mars rovers of NASA's Jet
Propulsion Laboratory, Automated E-Gator, etc.).

D* Lite is an incremental algorithm that is efficient in


dynamic environments. It works by performing an A*
search to generate an initial solution. Then, when the map
of terrain is updated, it repairs its previous solution by
reusing as much of its previous search efforts as
possible. As a result, it can be orders of magnitude more
efficient than replanning from scratch every time the
world model changes, as in the case of ARA*. However,
while this replanning algorithm substantially speeds up a
series of searches for similar problems, it lacks the
anytime property of ARA*.
Anytime Dynamic A* (AD*) is an algorithm that is both
anytime and incremental. Anytime D* produces solutions
of bounded suboptimality in an anytime fashion. It
improves the quality of its solution until the available
search time expires, at every step reusing previous search
efforts. When updated information regarding the
underlying graph is received, the algorithm can
simultaneously improve and repair its previous solution.
It thus combines the benefits of anytime and incremental
planners and provides efficient solutions to complex,
dynamic planning problems under time constraints.

References:
[1] Hansen,E.A., Zhou,R.: Anytime heuristic search.
Journal of Artificial Intelligence Research (JAIR) 28,
pp. 267-297, 2007.
[2] Likhachev,M., Gordon,G., Thrun,S.: ARA*: Anytime A* with provable bounds on sub-optimality, Advances in Neural Information Processing Systems, MIT Press, 2003.
[3] Azizi,F., Houshangi,N.: Mobile robot position
determination. Recent Advances in Mobile Robotics,
Dr. Andon Topalov (Ed.), ISBN: 978-953-307-909-7, InTech, DOI: 10.5772/26820, 2011.
[4] Borenstein,J., Everett,H.R., Feng,L., Wehe,D.:
Mobile robot positioning sensors and techniques.
Invited paper for the Journal of Robotic Systems,
Special Issue on Mobile Robots, vol. 14/4, pp. 231-249, 1997.
[5] Koenig,S., Likhachev,M.: D* Lite, Eighteenth
National Conference on Artificial Intelligence
(AAAI), pp. 476-483, 2002.
[6] Likhachev,M., Ferguson,D., Gordon,G., Stentz,A., Thrun,S.: Anytime Dynamic A*: The proofs, Tech. Rep. CMU-RI-TR-05-12, Carnegie Mellon University, Pittsburgh, PA, 2005.
[7] Pivtoraiko,M., Kelly,A.: Differentially constrained
motion replanning using state lattices with
graduated fidelity, Proceedings of the 2008
IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 08), pp. 2611-2616,
2008.
[8] Ferguson,D., Stentz,A.: Field D*: An Interpolation-based Path Planner and Replanner, International
Symposium on Robotics Research (ISRR), 2005.

The anytime behavior of ARA* and AD* strongly relies


on the properties of the heuristics and the key function.
Additionally, it relies on the assumption that a sufficiently
large inflation factor substantially expedites the
planning process. While in many domains this assumption
is true, this is not guaranteed. Consequently, the choice of
heuristics, key function and inflation factor must be the
subject of analysis in each specific case.

8. CONCLUSION
In this paper, we have presented and compared three grid-based algorithms for path planning.
Anytime Repairing A* (ARA*) provides provable bounds
on the suboptimality of the solution it produces. As an
anytime algorithm it finds a feasible solution quickly and
then continually works on improving this solution until
the time available for planning runs out. While improving
the solution, ARA* reuses previous search efforts and, as
a result, is significantly more efficient than other anytime
search methods. ARA* is an algorithm well-suited for
operation under time constraints.

SENSORLESS BRUSHED DC MOTOR SPEED CONTROL USING NATURAL TRACKING CONTROL ALGORITHM
MILOŠ PAVIĆ
Military Technical Institute, Belgrade, cnn@beotel.rs
MILAN IGNJATOVIĆ
Military Technical Institute, Belgrade, milan.ignjatovic@hotmail.rs
NATAŠA VLAHOVIĆ
PhD studies: School of Electrical Engineering, University of Belgrade, Military Technical Institute, Belgrade,
natasha.kljajic@yahoo.com
MIRKO MILJEN
Military Technical Institute, Belgrade, mirko0705@gmail.com

Abstract: This paper presents sensorless speed control of a brushed DC motor, minimizing the system cost without any
modification on the motor side. It uses a natural tracking control algorithm that is robust to unmodeled system
dynamics and external disturbances. The control algorithm was verified by physical experiments.
Keywords: motor speed, control, sensorless, natural tracking, robust.

1. INTRODUCTION
Motor speed control without any sensors is very useful in
cases when sensors cannot be used, for example in
applications where the rotor is in a closed housing and the
number of electrical inputs must be minimal, such as in a
compressor, or in applications where the motor is immersed
in liquid (as a pump). Sensor wiring and integration into
the motor also increase the drive cost.
Therefore, the speed control presented in this paper does not
require any modification on the motor side: for sensing the
motor speed, it uses the two wires that already provide
power to the motor.

The natural tracking control algorithm is used because of its robustness to unmodeled dynamics and external disturbances.

2. MATHEMATICAL MODEL OF THE BRUSHED DC MOTOR

Equations that describe the brushed DC motor can be found in [1] and [2]; here they are explained briefly.

Motor torque Mm is proportional to the magnetic flux and the motor current Im:

Mm = KM Im    (1)

where KM is the torque constant of the motor. Due to the rotation of the rotor, the back electromotive force (EMF) Em is induced in the rotor windings. The back EMF is proportional to the angular speed of the rotor ωm:

Em = Ke ωm    (2)

where Ke is the voltage constant of the motor. The torque constant KM and the voltage constant Ke are equal in the case of an ideal motor; in reality, there is a slight difference between them.

Picture 1 represents the equivalent electrical circuit of the brushed DC motor. The voltage equation of the brushed DC motor is:

Um = Rm Im + Lm dIm/dt + Em    (3)

where Rm is the resistance of the rotor winding and Lm is the inductance of the rotor winding.

The mechanical motion is described by:

Mm = Jm dωm/dt    (4)

where Jm is the rotor moment of inertia.

Picture 1. Electric circuit of brushed DC motor


2.1. Transfer function system representation

By using (1)-(4) and applying the Laplace transform, one can obtain the block diagram that represents the brushed DC motor, shown in Picture 2, where ML represents the disturbance torque and the electrical time constant of the motor is defined as:

Te = Lm / Rm    (5)

Picture 2. Block diagram of brushed DC motor

The transfer function of the system is:

W = ωm / Um = KM / ((Lm s + Rm) Jm s + KM Ke)    (5)

2.2. State-space system representation

Choosing the motor angular speed ωm and the motor current Im as state variables, the state vector is:

x = [ωm  Im]T    (6)

The system output is the motor angular speed:

y = [1  0] x ≜ C x    (7)

and the system dynamics given by (3) and (4) is transformed as:

dx/dt = A x + B Um,   A = [0, KM/Jm; -Ke/Lm, -Rm/Lm],   B = [0; 1/Lm]    (8)

3. NATURAL TRACKING CONTROL ALGORITHM

The control law is based on the concept of Natural Tracking (NT), presented by Gruyitch and Mounfield in [3]-[6]. This concept is robust to variations of the internal system dynamics and to external disturbances, which makes it very interesting as a possible control law.

A dynamic system is described in state space as:

dx/dt = A x + B u + D w    (9)

The basic idea of the NT concept for system (9) can be explained in the following way. Observing the system at the time instant t⁻, which is just before the current time instant t and indefinitely close to it, the state-space equation can be written as:

dx⁻/dt = A x⁻ + B u⁻ + D w⁻    (10)

The control is defined as:

u = u⁻ + T(e, de/dt, ...)    (11)

where T(e, de/dt, ...) represents an arbitrary function of the error vector and its derivatives and/or integrals. Now (9) can be represented as:

dx/dt = A x + B u⁻ + B T(e, de/dt, ...) + D w    (12)

All variables are physical, so they are all continuous, i.e. for Δt → 0+ the following relations stand:

x⁻ = x,   dx⁻/dt = dx/dt,   w⁻ = w    (13)

Based on the given equations, and comparing (10) and (12), one can conclude that:

B T(e, de/dt, ...) = 0    (14)

After multiplying the last equation by the matrix B⁻¹ on the left side, it follows that the function T(e, de/dt, ...), which defines the control, is equal to zero:

T(e, de/dt, ...) = 0    (15)

This allows that, by choosing an appropriate function T, the desired dynamic behavior of the system error can be accomplished. If e is chosen to be the state-space system trajectory (not only the system output), then (15) represents the desired stability characteristic, because the solution of this differential equation in the error e represents the performance of the stability tracking.

To achieve exponential stability tracking, the following function is usually used:

T(e, de/dt, ...) = de/dt + e = 0    (16)

Then, the control law (11) and the chosen error function (16) ensure that a naturally trackable system has asymptotic-exponential convergence of the regulated variable, because the system error e(t) can be obtained as a solution of the following differential equation:

K1 de(t)/dt + K0 e(t) = 0    (17)

This solution is of the following form:

e(t) = C exp(-(K0/K1) t)    (18)

and approaches zero asymptotically.
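To make the model concrete, the following minimal Python sketch integrates the state-space equations of Section 2.2 with the Euler method; the numerical parameter values are illustrative assumptions, not the parameters of the motor used in the experiments.

```python
# Minimal Euler simulation of the brushed DC motor state-space model (8).
# Parameter values are illustrative assumptions, not data from the paper.
Rm, Lm = 2.0, 1.0e-3        # rotor resistance [ohm] and inductance [H]
Km, Ke = 0.02, 0.02         # torque and voltage constants
Jm = 1.0e-5                 # rotor moment of inertia [kg*m^2]

def step(omega, i, u, dt=1e-5):
    """One Euler step of d(omega)/dt = (Km/Jm)*i and di/dt = (u - Rm*i - Ke*omega)/Lm."""
    d_omega = (Km / Jm) * i
    d_i = (u - Rm * i - Ke * omega) / Lm
    return omega + dt * d_omega, i + dt * d_i

# Example: open-loop response to a constant 12 V supply from standstill.
omega, i = 0.0, 0.0
for _ in range(200000):     # 2 s of simulated time
    omega, i = step(omega, i, 12.0)
print("steady-state speed [rad/s]:", round(omega, 1))
```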

3.1. Natural tracking application

Natural tracking was applied to various control systems, for example: control of an unstable chemical reaction [6], an electro-pneumatic servo motor [7] and an electro-pneumatic piston drive [8].

In the case of a single-input single-output (SISO) system and first-order natural tracking, the control is defined as:

u(t) = u(t⁻) + K1 de(t)/dt + K0 e(t)    (19)

where u(t⁻) represents the control value at the time instant t⁻, and u(t) represents the control value which is calculated and applied to the system at the current time instant, i.e. u(t⁻) = lim Δt→0+ u(t - Δt).

This condition is satisfied in the case of continuous-time systems without delay; in the case of discrete-time systems, u(t⁻) = u(t - TS), so the sampling period should be very short.

The demanded angular speed of the motor is ωd, and the error is defined as:

e(t) = ωd(t) - ωm(t)    (20)

The natural tracking control algorithm is shown in Picture 3.

Picture 3. Block diagram of natural tracking control

4. PRACTICAL IMPLEMENTATION

An Atmel ATmega328 microcontroller was used for the synthesis of the control; the electrical diagram is shown in Picture 4.

Picture 4. Electrical diagram of the motor speed control

One digital output pin is used to generate the control signal with pulse-width modulation (PWM), with a modulation frequency of 980 Hz.

In order to get the current motor speed value without any mounted sensor (sensorless), one analog input pin is used to measure the back EMF. This is the EMF that occurs in electric motors when there is relative motion between the rotor and the external magnetic field; in other words, the motor acts like a generator as long as it rotates. The RPM is directly proportional to the back EMF voltage.

In order to measure the back EMF, the motor has to be powered off (to act as a generator), i.e. the modulated MOSFET has to be switched off during the measurement. The measurement should be done after a short delay from switching off the power to the motor, because there is a transient process before the generated back EMF voltage stabilizes (approximately 1 ms for the motor used).

It takes about 100 microseconds (0.1 ms) to read an analog input once, so 10 readings are taken in order to achieve better accuracy with the 10-bit A/D converter when measuring the back EMF. Oversampling increases the resolution and reduces the noise. The total measuring time is 1 ms.

The control loop time is 10 ms and it consists of:
- switching off the DC power to the motor,
- waiting 1 ms for the back EMF transient to end,
- measuring and calculating the mean value of the back EMF during a 1 ms interval,
- switching the DC power to the motor back on, in proportion to the calculated control.

Even if the calculated control is maximal, it is only applied for 8 ms in each 10 ms interval, limiting the total motor power to 80%. This limitation is easily overcome by increasing the supply voltage by 20% if the maximal motor speed is required.

The physical realization of the microcontroller is shown in Pictures 5 and 6.

Picture 5. Motor and microcontroller
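The 10 ms cycle and the discrete form of the control law (19)-(20) could be sketched as follows; the hardware-access callables and the back-EMF scale factor are placeholders (assumptions), since the actual firmware runs on the ATmega328.

```python
import time

K0, K1 = 10.0, 1.0     # NT gains used in the experiments
T_LOOP = 0.010         # 10 ms control loop
K_EMF = 1.0            # back-EMF [V] to speed [rad/s] scale (assumed calibration)

def read_back_emf(adc_read):
    """Average 10 ADC readings (~0.1 ms each) of the back EMF, as described above."""
    return sum(adc_read() for _ in range(10)) / 10.0

def control_cycle(set_pwm, adc_read, demanded_speed, u_prev, e_prev):
    """One 10 ms cycle: power off, wait 1 ms, measure, then apply the new control."""
    set_pwm(0.0)                          # switch the motor off (MOSFET off)
    time.sleep(0.001)                     # wait for the back-EMF transient to settle
    speed = K_EMF * read_back_emf(adc_read)
    e = demanded_speed() - speed          # error (20)
    e_dot = (e - e_prev) / T_LOOP         # finite-difference derivative of the error
    u = u_prev + K1 * e_dot + K0 * e      # discrete form of the NT law (19)
    u = max(0.0, min(1.0, u))             # saturate to the available PWM duty range
    set_pwm(u)                            # power the motor for the rest of the cycle
    time.sleep(T_LOOP - 0.002)
    return u, e
```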


Picture 6. Microcontroller with power MOSFET

5. EXPERIMENTAL RESULTS

The parameters of the NT control algorithm are K0 = 10 and K1 = 1, tuned by an iterative method. They depend on the desired dynamic characteristics of the system, i.e. on the error reduction of the regulated motor speed.

Two experiments were performed in order to test the robustness of the proposed algorithm to an external load on the motor shaft.

In the first experiment, a motor with a free (unloaded) shaft was used. The measured and demanded rotational speeds are shown in Picture 7, and the control signal in Picture 8.

Picture 7. Rotational speed of the motor with free rotor shaft

Picture 8. Control signal in the case of free rotor shaft

In the second experiment, an eccentric inertial load was mounted on the motor shaft. The measured and demanded rotational speed plots are shown in Picture 9, and the control signal in Picture 10.

Picture 9. Rotational speed of the motor with eccentric load

Picture 10. Control signal in the case of eccentric load

Comparing these results, one can conclude that the NT control algorithm is robust to external load, because tracking of the desired motor speed is almost the same (Pictures 7 and 9).

It should be noted that in the case of external load, more energy was used to achieve the same performance (Pictures 8 and 10), because, roughly, the energy can be represented as the integral of the control effort, i.e. the hatched area under the graph of the control signal.

6. CONCLUSION

Brushed DC motor speed control was successfully achieved without using any rotational sensor, such as a tachometer, encoder or potentiometer, i.e. sensorless.

The natural tracking control algorithm was used because it is robust to unknown system dynamics and external disturbances.

The experiments verified the robustness of the control in the cases of a free and a loaded motor.

References
[1] Curcin,M.: Mathematical model and numerical simulation of electro-mechanical missile actuator, Scientific Technical Review, 50(4-5) (2000) 37-45.
[2] Pavkovic,B.: One method of electromechanical actuator parameters identification, Scientific Technical Review, 53(1) (2003) 46-52.
[3] Grujic,Lj.T., Mounfield,W.P.Jr.: PD-Control for Stablewise Tracking with Finite Reachability Time: Linear Continuous-Time MIMO Systems with State-Space Description, International Journal of Robust and Nonlinear Control, 3(4) (1993) 341-360.
[4] Gruyitch,L.T., Mounfield,W.P.Jr.: Stablewise Absolute Output Natural Tracking Control with Finite Reachability Time: MIMO Lurie Systems, Mathematics and Computers in Simulation, 76(5-6) (2008) 330-344.
[5] Gruyitch,L.T.: Natural Tracking Control, In: Tracking Control for Linear Systems, CRC Press, Boca Raton (2013) 119-292.
[6] Grujic,Lj.T., Mounfield,W.P.: PD Natural Tracking Control of an Unstable Chemical Reaction, In: Proceedings of the Cairo Third IASTED International Conference, Cairo, Egypt, (1994) 730-735.
[7] Lazic,D.V.: Exponential tracking control of an electro-pneumatic servo motor, Strojniški Vestnik - Journal of Mechanical Engineering, 54(1) (2008) 62-67.
[8] Lazic,D.V.: Practical Tracking Control of the Electropneumatic Piston Drive, Strojniški Vestnik - Journal of Mechanical Engineering, 56(3) (2010) 163-168.

SECTION VI

Telecommunication and
Information Systems

CHAIRMAN
Professor Goran Diki, PhD

EVALUATION OF SELF-ORGANIZING UAV NETWORKS IN NS-3


NATAŠA MAKSIĆ
School of Electrical Engineering, University of Belgrade, maksicn@etf.rs
MILAN BJELICA
School of Electrical Engineering, University of Belgrade, milan@etf.rs

Abstract: UAVs (Unmanned Aerial Vehicles) are being increasingly used both for military and civilian applications.
These so-called drones are starting to replace conventional aircraft in reconnaissance and battle missions, rescue
operations, wildlife protection, farming, movie making, and so on. The UAV self-organizing ad-hoc networks are
suitable for a range of problems involving connectivity and information gathering. Although dedicated simulators for
the UAVs are readily available, their communication aspects can be observed in more detail by network simulators.
This paper discusses the use of the ns-3 open-source network simulator, which supports various routing protocols,
wireless modulation schemes, channel propagation models, and queues; moreover, it enables definition of the UAV
movement algorithms through mobility models and includes tools to visualize the UAV positions and connectivity. A
simulation testbed for a typical reconnaissance and information gathering mission with a network of self-organizing
UAVs that move autonomously, without being controlled by a human operator from the ground is described. Two
movement control algorithms that tend to increase both coverage and connectivity while keeping the algorithm simple
and robust are implemented and compared.
Keywords: Unmanned aerial vehicles, ns-3 simulator, mobility, coverage, connectivity.
1. INTRODUCTION

UAVs (Unmanned Aerial Vehicles) are increasingly used for various purposes in military and civilian organizations. In military applications, drones are starting to replace conventional aircraft in reconnaissance and battle roles, because of their smaller price and the absence of a crew who might risk their lives in dangerous missions. In civilian applications, the UAVs are also used for reconnaissance in rescue operations during natural or human-caused disasters. They are also used for aerial reconnaissance for wildlife protection [1], farming [2], and providing wireless network coverage [3]-[5]. Moreover, cameras installed on the UAVs enable applications such as surveillance and filmmaking. Recently there are plans to use the UAVs for delivery of packages to recipients [6].

The list of UAV applications is constantly expanding, involving platforms of different sizes and capabilities. The UAVs performing military tasks can have long flight and communication ranges and significant size. Opposite to them are insect-size UAVs, with very limited flight and communication ranges due to scarce energy resources. Such UAVs are typically used for information gathering, both in military and civilian applications.

In many of the above-mentioned applications, the UAVs are controlled by a human operator from the ground using a radio link. The human operator can use UAV sensors, such as a camera and a position locator, to assess the situation of the UAV and to act appropriately. In the unpredictable circumstances of military and rescue operations, operator skills are essential for the success of the operation.

There are certain applications of the UAVs, such as packet delivery, reconnaissance, or information gathering, in which the UAV operation can be automated, thus making a human operator redundant. Such automation not only reduces the cost per UAV, but also enables deployment of a large number of the UAVs. In particular, during operations of constant reconnaissance and information gathering, the task of the UAVs is to position themselves over a certain area in a way which achieves coverage of the terrain and connectivity between the participating UAVs; this requires sophisticated algorithms for movement control on a per-UAV level.

In general, the process of developing a UAV communications system and accompanying control algorithms requires a heavy use of modeling and simulation. In this paper, we discuss modeling and simulation of such systems in the ns-3 network simulator software [7]. We evaluate two algorithms suitable for reconnaissance and information gathering, that rely upon a self-organizing automated group of the UAVs.

The rest of the paper is organized as follows. In the second section we discuss the general simulation framework for UAV communication networks. Section 3 describes the available features in ns-3. In Section 4, two algorithms for automated UAV control are described. The simulation setup and results are given in Section 5. Finally, Section 6 concludes the paper.

2. UAV COMMUNICATION NETWORKS


SIMULATION

3. UAV NETWORK SIMULATION IN NS-3


Our focus is on communication subsystem and automatic
movement algorithm, which are essential both for
connectivity and coverage.

Designing an automatic UAV control for the applications


involving groups of the UAVs is subject of current
research. One of the key aspects of using a group of
automatically controlled UAVs in tasks such as
reconnaissance and information gathering is providing
efficient communication within the group. On the one
hand, the need for miniaturization of the UAVs imposes
limitations on communication subsystem energy
consumption; on the other hand, there is a need to
disperse the UAVs in order to cover a larger area,
compared to ground sensors.

Network simulator ns-3 supports wireless communication


and routing standards, and as such, it provides great base
for exploring different solutions of communication
subsystem. The ns-3 simulator also supports movement of
mobile devices in three dimensional space and simulation
of wireless communication in that space. Different
mobility models can be defined and assigned to mobile
devices, which in our case represent UAVs.
In terms of three criteria of completeness defined in [8],
namely actuation, sensing and communication, the ns-3
supports actuation and sensing in simplified form, and
communication in a very advanced form. As far as the
actuation is concerned, mobility models defined in the ns-3 support controlling the UAV speed and direction; they
do not include calculation of forces to which the UAVs
are exposed. This enables simulation of movement
algorithm in perfect conditions, without considering
geometry and propulsion issues. In terms of sensing, the
ns-3 does not include simulation of various types of
sensors that can be installed on the UAVs; it should be
done either in some other simulation tool, or implemented
as an additional module in the ns-3. On the other hand,
the ns-3 provides an extensive support for simulation of
wireless networks and protocols and provides means for
simulation of different telecommunication standards in
the network of UAVs.

A UAV movement control algorithm needs to satisfy both


requirements to (a) keep the UAVs within communication
range and (b) disperse them in order to cover a large area.
Simulation is a method of choice for its design.
The UAV groups simulation can be performed using
various tools, such as Simbeeotic [8], which aims to
provide insight in UAV kinematics, sensors, and radios. It
is written in Java and uses other libraries for advanced
functionality, such as Jbullet [9] for ballistics. It also
supports hardware-in-the-loop simulations which involve
both real and simulated UAVs. There are also other tools
that aim at simulating behavior of the UAV groups [10][11]. However, regarding communication protocols, these
tools do not provide a level of support offered by
dedicated simulators such as ns-3 or OMNeT++ [12].
In this paper, we will discuss UAV swarm simulations in
the open-source ns-3 network simulator. To give an
insight into model complexity, the cloc tool [13] was used
to count number of code lines in projects. Simbeeotic
project currently has approximately 20000 lines of Java
and XML code. Source folder of ns3-project has over
300000 lines of C++ code, with approximately 30000
lines of code in Wi-Fi folder and approximately 5000
lines in mobility folder. Some other functionalities related
to wireless networks such as propagation and ad-hoc
routing protocols are located in different folders. There
are also implementations of other wireless network
protocols such as LR-WPAN and WiMax. Network
simulator ns-3 also provides tools for visualization such
as NetAnim which enable us to observe movement of the
UAVs during the simulation.

Network simulator ns-3 can be used to develop the
communication subsystem and the movement algorithm.
After that, for the evaluation of advanced actuation and sensors,
some other tool can be used, such as Simbeeotic.
With this in mind, in our simulation setup we have used
Wi-Fi protocol stack and AODV routing protocol [14]
from ns-3, running on the UAVs and in the command
center. Ns-3 models of the Wi-Fi channel, PHY and MAC
layers, which are implemented based on Yans [15] were
also used. By default, propagation model uses constant
propagation speed and log-distance propagation model.
In order not to change a usual Wi-Fi behavior, we used
default values of the most parameters. However,
communication range has been increased to 500 m, and
size and maximal delay of the Wi-Fi MAC queue have
been reduced in order to avoid too long packet delays.

In the scenario of using a set of automatically controlled


UAVs for reconnaissance and information gathering, the
UAVs should move above the area, and send gathered
information to a command center. The movement
algorithms can then be evaluated by connectivity and
coverage; the former provides information on how many
UAVs can communicate with the command center, while
the latter corresponds to a portion of the target area in
which ground sensors can communicate with the
command center.

When in communication range, the UAVs use AODV


routing protocol to set up routing tables and provide
packet routing over a chain of the UAVs.

3.1. Mobility
Mobility algorithm has been implemented in a class
extended from the ns3::MobilityModel class. This enables
integration with the ns3::MobilityHelper and simple
simulation setup. Speed and direction of a UAV can be
defined inside this mobility class; their values can be
changed in scheduled times, or when some event occurs.
In the simulation, direction is periodically changed. After

In the process of movement algorithm and


communication subsystem design, simulation can be used
for assessing connectivity, coverage and other properties
of the potential solutions.


the predefined time, we check if connectivity with the


command center has been recently established. If positive,
then the next direction is randomly chosen, while
otherwise, the direction is chosen based on the applied
algorithm.


4. UAV MOVEMENT ALGORITHMS


Two mobility algorithms were examined. In the first one,
a UAV moves randomly while it has connectivity, and
returns to a location of last contact when it loses
connectivity [16]. This algorithm is aimed at increasing
coverage by allowing the UAVs to select direction
randomly as long as they have connectivity. After the
connectivity is lost, the UAVs will move back towards the
last point in which they had connectivity.

During the simulation, the movement is constrained to a


predefined rectangle space. Should a UAV reach a border
of this space, it will change its direction and return into
the rectangle.

3.2. Connectivity
We define connectivity as ability of a UAV to
communicate with its command center. Connectivity is
examined by sending test messages from the UAVs to the
command center. The time is split into slots, and we
determine if there is connectivity during each of them.

Should a UAV reach the last point in which it previously
had connectivity and find no connectivity at that point,
we have set that the UAV will return towards a
predefined location.

Test messages are sent every 0.9 s. The slot duration is


2 s, so the command center should receive two control
messages in each slot. If no messages are received, we
declare that there was no connectivity in that slot.

We have also considered a modification of this algorithm,


in which a UAV will return towards a predefined location
whenever it loses connectivity. This will ensure more
stable concentration of UAV movement in the vicinity of
the predefined point. For simplicity, this point is the
command center.
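The two direction-selection rules described above can be summarized in a small sketch (names are illustrative; the actual implementation is an ns-3 mobility model).

```python
import math
import random

def heading_towards(p, q):
    """Bearing in degrees from point p=(x, y) towards point q."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def choose_heading(pos, heading, last_contact, command_center,
                   connected_recently, return_to_center):
    """Direction update made every few seconds: while connectivity was recently
    confirmed, keep heading or make a 90-degree turn at random; otherwise head
    back to the last-contact point (first algorithm) or directly to the command
    center (second algorithm)."""
    if connected_recently:
        return heading + random.choice([0.0, 90.0, -90.0])
    target = command_center if return_to_center else last_contact
    return heading_towards(pos, target)
```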

Average connectivity during simulation is calculated as a


ratio of the number of slots which had connectivity and
total number of slots of all the UAVs.
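The slot-based connectivity metric amounts to a simple ratio over all UAVs and slots; a minimal sketch with an assumed data layout:

```python
def average_connectivity(slot_reports):
    """slot_reports[u][k] is True if the command center received at least one
    test message from UAV u during time slot k (2 s slots, messages every 0.9 s)."""
    total = sum(len(slots) for slots in slot_reports)
    connected = sum(sum(1 for ok in slots if ok) for slots in slot_reports)
    return connected / total if total else 0.0
```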

5. SIMULATION RESULTS

3.3. Coverage

Simulation was run in ns-3.25 and lasted 5600 s of


simulation time. Table 1 summarizes simulation
parameters. 30 UAVs and grid of 30 rows and 30 columns
with a 200 m step were considered. Command center is
located in the grid center. The UAVs are initially
randomly distributed inside a circle of 600 m radius,
whose center is also located in the command center. UAV
speed is 30 m/s, and they will move 5 s before selecting
next direction. After 5 s, the UAVs that move randomly
will select whether to make a 90° right turn, a 90° left turn, or
continue to move straight. In order to reduce the effects of the
initial state, measurements are performed starting at 2000 s
from the start of the simulation, and continuing for 3600 s
until the end of the simulation.

Coverage is also calculated by checking application-level


connectivity. For this purpose, we introduce a grid of
nodes which represent ground sensors. Picture 1 shows
command center, UAV and ground grid of sensor nodes.
With the assumption of using simple sensors, AODV is
not installed on the sensor nodes, which configure their
routing tables by listening to broadcast messages from the
UAVs. Only those UAVs that currently have connectivity
with the command center will send broadcast messages
intended for the ground grid sensors. When a ground node
receives a broadcast message from a UAV, it sets that
UAV as a next hop for the packets that should be sent to
the command center.

Picture 1. Communication during simulation

As in the case of connectivity, time is split into slots and the ground grid sensors which received broadcast messages during the slot duration are counted. Average coverage is calculated as the ratio of the number of slots which had connectivity and the total number of slots. On the other hand, interval coverage is defined with respect to the area that had coverage at any time during the measurement interval.

Table 1. Simulation parameters
Number of UAVs: 30
UAV cruising speed: 30 m/s
UAV altitude: 250 m
Grid size: 6 km × 6 km
Grid step: 200 m
Number of ground nodes: 900
UAV communications range: 500 m
Propagation model: log-distance
PHY layer: Wi-Fi
Operating frequency: 2.412 GHz
Queue size: 300 packets
Queue maximal delay: 1 s
Routing protocol: AODV
Time slot duration: 2 s
Simulation duration: 5600 s

Node positions during simulation can be observed in


NetAnim tool, which is shown in Picture 2. An area

around the command center is depicted, which has index


0. The UAVs have indices 1 to 30. Near the command
center we can observe the UAVs #11 and #9. We can also
observe a part of the ground grid; its nodes are placed
regularly and have larger indices. The command center is
collocated with a grid node #465 in the lower part of the
image.

significantly differ. It is, however, worth noting that


depending on the adopted movement algorithm, the UAVs
will reach different areas of the grid at a given time, as their
operating constellations (the so-called swarms) will
differ.
In order to evaluate changes in connectivity and coverage
as the number of used UAVs increases, we have
performed simulations for different numbers of UAVs.
Picture 3 shows the results of the connectivity
measurement for the two evaluated algorithms. This
figure shows that connectivity does not significantly
change with the increase in the number of UAVs. It also
shows that two algorithms have similar connectivity with
the small advantage of the algorithm with return to the
command center. This advantage can be explained by the
fact that in this algorithm UAV returns directly towards
center of the UAV group, instead of moving first to the
point of last contact. This enables the UAV to restore
connectivity more quickly.

The NetAnim can be used to visualize the movement of


the UAVs; this can help us to check whether a UAV
movement algorithm performs as expected. For this
purpose, let us return to Picture 2, as it shows a situation
at 20.5 s of simulation time. In the course of animation, if
a UAV moves away from the command center and other
UAVs, the animation shows how it stops with the random
movement and returns either to the point of last contact,
or to the command center.

Picture 3. Connectivity for changing number of used


UAVs

Picture 2. Node movement visualization in NetAnim

Picture 4 shows average coverage for different number of


UAVs. This picture shows that two analyzed algorithms
have similar values of coverage, and that coverage
slightly increases with the increase of number of UAVs.

Simulation results are given in Table 2.


Table 2. Simulation results
Return to last
Parameter
contact
Average
87.3%
connectivity
Average
15.1%
coverage
Interval
66.7%
coverage

Return to
command center
90.1%
13.0%
60.4%

Connectivity changes that can be observed during the


simulation are caused by random walk of the UAVs,
which brings them in and out of the communications
range. Further, let us not forget that the ground area of
36 km² is to be covered with 30 UAVs. Having this in
mind, the mean coverage of ~15%, which might seem
surprisingly small at first sight, should be regarded as
an average over 3600 s of measurement time. Indeed, the
considered algorithms achieve 66% and 60% interval
coverage, respectively, which is a fair value.

Picture 4. Coverage of the algorithm with return towards


command center for changing number of used UAVs

The obtained results for both examined algorithms do not



Picture 5 shows interval coverage for different number of


UAVs. Two analyzed algorithms have similar values of
interval coverage. This graph shows that interval coverage
has larger values than instantaneous coverage, which can
be explained by changes of the swarm shape during the
simulation. These changes result in coverage of the
different areas during the simulations.

References
[1] Chabot,D., Bird,D.: Wildlife research and
management in the 21st century: Where do
unmanned aircraft fit in, Journal of Unmanned
Vehicle Systems, 3(4) (2015) 137-155.
[2] Guo,T., Kujirai,T., Watanabe,T.: Mapping Crop
Status From an Unmanned Aerial Vehicle for Precision
Agriculture Applications, International Archives of
the Photogrammetry, Remote Sensing and Spatial
Information Sciences, Volume XXXIX-B1, 2012
XXII ISPRS Congress, 25 August - 01 September
2012, Melbourne, Australia
[3] Zeng,Y., Zhang,R., Lim,T.J.: Wireless Communications with Unmanned Aerial Vehicles: Opportunities and Challenges, IEEE Communications Magazine, May 2016
[4] Zhan,P., Yu,K., Swindlehurst,A.L.: Wireless Relay
Communications with Unmanned Aerial Vehicles:
Performance and Optimization, Proc. IEEE
Transactions on Aerospace and Electronic Systems,
vol. 47, no. 3, pp. 2068-2085, 2011.
[5] Maksić,N., Bjelica,M.: Pokrivanje oblasti WiFi mrežom korišćenjem bespilotnih letelica (Covering an area with a WiFi network using unmanned aerial vehicles), ETRAN 2016.
[6] Amazon Prime Air [Online], Available: https://www.amazon.com/b?node=8037720011
[7] The ns-3 network simulator [Online], Available:
http://www.nsnam.org/
[8] Kate,B., Waterman,J., Dantu,K., Welsh,M.: Simbeeotic: A simulator and testbed for micro-aerial vehicle swarm experiments, ACM/IEEE 11th International Conference on Information Processing in Sensor Networks (IPSN), 2012, 49-60
[9] JBullet, [Online], Available: http://jbullet.advel.cz
[10] Hiebeler,D.: The Swarm simulation system and
individual-based modeling, Proceedings of Decision
Support 2001: Advanced Technologies for Natural
Resource Management, Toronto, Sept. 1994.
[11] Luke,S., Cioffi-Revilla,C., Panait,L., Sullivan,K.:
MASON: A new multi-agent simulation toolkit,
Proceedings of the 2004 SwarmFest Workshop,
2004.
[12] Varga,A., Hornig,R.: An Overview of the OMNeT++
simulation environment, SIMUTools, Marseille,
France, March 2008.
[13] CLOC Count Lines of Code, [Online], Available:
cloc.sourceforge.net/
[14] Perkins,C.E., Royer,E.M.: Ad-hoc on-demand
distance vector routing, Proc. IEEE Workshop on
Mobile Computing Systems and Applications
(WMCSA '99), pp. 90-100, February 1999
[15] Lacage,M., Henderson,T.: Yet another network
simulator, Proceeding from the 2006 workshop on
ns-2: the IP network simulator, 2006.
[16] Orfanus,D., Pignaton de Freitas,E., Eliassen,F.: Self-Organization as a Supporting Paradigm for Military
UAV Relay Networks, IEEE Communications
Letters, February 2016.

Picture 5. Interval coverage for changing number of used


UAVs

6. CONCLUSION
Communication system is a key component which
enables autonomous networks of UAVs. Automatically
controlled UAVs which constitute these networks have to
maintain connectivity and route data packets. Selection of
communication and routing protocols and configuration
of queue sizes can have significant effect on connectivity
within a UAV network and coverage that it provides.
Simulation software specialized for packet networks
provides adequate means for simulation of UAV networks
communication functionality. The ns-3 network simulator
is among leading open-source simulation tools for packet
networks, and it enables simulation of both
communication subsystem in the UAV networks, and
algorithms which control path directions of the UAVs.
This paper has provided a performance analysis of two
UAV movement control algorithms, which rely upon a
random walk model. Simulation results indicate that they
perform similarly in terms of UAV connectivity and area
coverage. The algorithm which, in the case of
connectivity loss, firstly directs UAV towards point of
last contact provides larger coverage. The algorithm
which immediately directs UAV towards command center
provides larger connectivity. Directing UAV immediately
towards command center will increase the probability of
connecting with the group of UAVs that communicate
with command center, and hence increase average
connectivity during simulation. However, directing UAV
firstly towards point of last contact will result in increased
ground coverage. If the connectivity is not achieved at the
location of last contact, the UAV will proceed towards
command center, resulting in similar results of the two
algorithms. The ns-3 has provided valuable insight into
properties of the two algorithms.

CONCEPTUALIZING SIMULATION FOR LAWSON'S MODEL OF COMMAND AND CONTROL PROCESSES
NEBOJŠA NIKOLIĆ
Strategic Research Institute, MoD, Belgrade, nebojsa2008_g@yahoo.com

Abstract: In its essence, Lawson's model of the military command and control process consists of several consecutive
phases: sensing, processing, comparing, deciding and acting. The sensing phase is related to situational awareness:
collecting data primarily about the opponent forces. Raw data from sensors are then processed and passed through
communication lines to the higher level of the decision-making hierarchy. The comparing phase assumes contrasting the
filtered data with the desired end state, that is, using information distilled from the sensing data to achieve efficient
performance of given orders. After that follows the creation of a final decision enabling action. The military decision-making
process assumes the generation of several possible options, followed by mutual comparison of the option set according
to given criteria. After the decision making is finished, the final decision is implemented, which comprises formulation of
the decision, issuing the order to the appropriate units, acting or realization of the order in practice, observing the effects
of the acting, and reporting to the higher authority who gave the order. Lawson's command and control model is not a
collection of purely technical components, but includes the human factor as well. The main factors that bring randomness
into the model are the variability of human performance and the inherent stochastic nature of the data collected in the first
phase, sensing. Additionally, the consecutiveness of the phases and the nature of the processes suggest reliance on
tandem queueing models. This paper presents the development of a conceptual model for simulation of the basic Lawson's
model of command and control.
Keywords: concept, simulation, command, decision, queueing.

1. INTRODUCTION

The phenomenon of Command and Control has been an
important subject of research in the military domain for a
long time [1]. Comprehensive control and ever-present
command make any military organization different from
other organizational entities. In spite of the rigidity of overall
command and control, a military organization is expected to
be flexible, robust and agile in a hostile environment.
What makes the military able to fulfil its specific mission
is the "Command and Control" (C2) process supported by
an appropriate socio-technical system. Military C2 systems
have to be reliable, selective, resistant to interruption and
capable of processing and distributing relevant information in a
prescribed time.

C4I2 - Command, Control, Communications, Computers, Intelligence, Interoperability;
C4ISR - Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance;
C5I - Command, Control, Communications, Computers, Collaboration, Intelligence.

Here, we use the short basic term Command and Control
(C2) as a general designation, without entering the discussion
about differences among the mentioned aspects from C2 to
C5I. The C2 assumes an information system whose
purpose is to support the command process at all levels of
military units.
Behind the C2 system there are the following processes:
collecting, selecting, processing, communicating,
distributing and presenting all relevant data and
information of interest for successful performance of a
given task or mission. There are many similar definitions
of the C2 function in different armies. For the purpose of
illustration, two of them are as follows: "Command and
Control functions are performed through an arrangement
of personnel, equipment, communications, facilities, and
procedures employed by a commander in planning,
directing, coordinating, and controlling forces and
operations in the accomplishment of the mission" [2];
and the second one: "Command and Control is the
exercise of authority and direction by a properly
designated individual over assigned resources in the
accomplishment of a common goal" (NATO definition).

Depending on perceptual aspects in different sources and


context, Command and Control system may be found in
litereture under derivate names as follows:
C3Command, Control & Communication;
C2ICommand, Control & Intelligence;
C2ICommand, Control & Information;
C2ISCommand and Control Information Systems;
C2ISRC2I+Surveillance and Reconnaissance;
C2ISTARC2+ISTAR (Intelligence, Surveillance,
Target Acquisition, Reconnaissance);
C3ICommand, Control, Communications,
Intelligence;
C3ISTARC3+ISTAR;
C3ISREWC2ISR+Communications+Electronic
Warfare;

A counter-action issued by the defender usually takes some
time for threat verification, decision making and counter-action preparation. During this time, it could be said that
the attacker platform is waiting to be served. Also, the
decision execution itself takes some time to be done
(servicing).

2. LAWSON'S C2 MODEL
Lawson offered a simple logical concept of the Command
and Control process [3]. Many researchers have referred to
Lawson's work and his basic concept, so it has become
known as Lawson's model of C2. In its essence,
Lawson's model of the military command and control process
consists of several consecutive phases:
Sensing, Processing, Comparing, Deciding and Acting:
- The sensing is a starting point. A system of different
sensors scans the environment of the military
unit. The sensing phase is related to situational
awareness: collecting data primarily about the
opponent forces. At the tactical and at the operational
level, military units usually use all available
intelligence infrastructure and resources in order to
offer verified and reliable information, which is a
challenge by itself, [4].
- Rough data from sensors are then processed (filtered,
transformed, coded, transmitted) through
communication lines to the higher level of the decision-making hierarchy.
- The comparing phase assumes contrasting the filtered and
verified data against the desired end state, that is, using
information distilled from the sensing data to achieve
efficient performance of given orders.
- After that follows creation of a final decision enabling
action. The military decision-making process in
general assumes generation of several possible
options, followed by mutual comparison of the
option set according to given criteria.
- After the decision making is finished follows the final decision
implementation, which contains formulation of the
decision, issuing the order to the appropriate units, acting
or realization of the order in practice, observing the
effects of the acting and reporting to the higher
authority who gave the order.
Lawson's concept of command and control is a socio-technical system, that is, it is not a collection of purely
technical components but includes the human factor in all
phases. There are many possible factors that introduce
randomness into the C2 process: the human factor
(variability of performance of the human factor across
time, reliability); the fog of war (the inherent stochastic nature
of the environmental data, including opponent forces);
reliability of technical items; consistency of operating
procedures; etc.

3. C2 MODEL AS QUEUEING SYSTEM

The command and control process can be perceived as a
demand-for-service process, [5]. Queueing theory, or the
theory of waiting lines, deals with this kind of processes.
In the demand-for-service process, there are two main
entities: generators of demands for service, and service
channels. An initial event in the sensing phase, after
information processing and decision making, may produce
a demand for some kind of counter-action (servicing). For
example: detection and identification of an enemy
(attacker) combat platform in the zone of interest of the
military unit under study (defender) may generate some
kind of counter-action from the defender towards the attacker.

Picture 1. Conceptual queueing model for C2


Additionally, the model of the combat situation becomes
more complex because the attacker usually stays in the zone of
interest only for a limited portion of time. After some
time the attacker leaves the zone regardless of whether he was served or
not. This situation, with impatient clients, is called
reneging. Other synonyms are: systems with limited
queueing time, or queues with bounded waiting time. In a
real system, reneging may appear either in the queue or
during servicing. Reneging queues are of particular
importance for many real systems, including military
processes and systems, [6]. In fact, one of the first
published papers on reneging queues was written by a military researcher, [7].


model of Lawson's concept; and the Basic Algorithm of the
simulation model for Lawson's concept (this overall
graphical presentation is not placed here in order to retain
better visibility).
The next step would be to produce a more detailed and more
formal algorithmic presentation with appropriate
graphical symbols specific to the chosen computer
programming language. After establishing a formal and
consistent algorithm of the model, translation to
computer code should be a less problematic issue, but still
time-consuming, as is usually the case in computer
programming.

The next aspect in modeling the C2 process with the tools of
queueing theory is the consecutiveness of the main sub-processes.
The nature of the command and control process induces the
perception of five consecutive phases of processing in
Lawson's model of C2. This may be presented as a
multiphase queueing system (consisting of five
consecutive phases of servicing), known also as a tandem
queue.

Depending on the desired output (which variables and
parameters would be presented and in what form), the main
structure of the model could be enlarged with additional
blocks whose purpose is to register and collect appropriate
statistics needed for later analysis.

The tandem queueing model assumes successive servicing
of the initial demand through several phases. After
servicing in the first phase, the transactional demand joins
the queue in front of the second service facility.
After servicing in the second phase, the transactional demand
joins the queue in front of the third server. This
queueing and service processing continues until all service
facilities are visited. The combined case of tandem queueing and
reneging means that the transactional demand may renege
(due to impatience or the limited time of staying in the zone
of servicing interest) at any point on its way through the
multiphase servicing. The described process is presented in
Picture 1.
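As an illustration of the conceptual model described above, the following short Monte Carlo sketch (in Python; ours, not part of the original paper) simulates a five-phase tandem queue with Poisson arrivals, exponential service times and a limited "patience" after which a demand reneges. The phase names correspond to Lawson's cycle, while the arrival rate, service rates and mean patience are purely illustrative assumptions.

    import random

    random.seed(1)
    PHASES = ["sensing", "processing", "comparing", "deciding", "acting"]  # Lawson's phases
    ARRIVAL_RATE = 1.0                          # detected threats per time unit (assumed)
    SERVICE_RATES = [4.0, 3.0, 2.5, 2.0, 3.5]   # one server per phase (assumed)
    PATIENCE_MEAN = 3.0                         # mean time a threat stays in the zone (assumed)
    N_DEMANDS = 100_000

    free_at = [0.0] * len(PHASES)     # when each phase server becomes free
    waits = [0.0] * len(PHASES)       # accumulated waiting time per phase
    served, reneged, total_delay, t = 0, 0, 0.0, 0.0
    for _ in range(N_DEMANDS):
        t += random.expovariate(ARRIVAL_RATE)               # Poisson arrival stream
        deadline = t + random.expovariate(1.0 / PATIENCE_MEAN)
        ready, lost = t, False
        for j, mu in enumerate(SERVICE_RATES):
            start = max(ready, free_at[j])
            if start >= deadline:                            # reneged while waiting
                lost = True
                break
            waits[j] += start - ready
            finish = start + random.expovariate(mu)
            if finish > deadline:                            # reneged during servicing
                free_at[j] = deadline
                lost = True
                break
            free_at[j] = finish
            ready = finish
        if lost:
            reneged += 1
        else:
            served += 1
            total_delay += ready - t
    print("served %.1f%%, reneged %.1f%%" % (100 * served / N_DEMANDS, 100 * reneged / N_DEMANDS))
    print("mean end-to-end delay of served demands: %.3f" % (total_delay / served))
    for name, w in zip(PHASES, waits):
        print("mean wait (per arriving demand) before %-10s %.3f" % (name, w / N_DEMANDS))

The per-phase waiting statistics collected here are exactly the kind of output needed for the bottle-neck analysis discussed below.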

One of the most critical measures of performance of the
Command and Control system is the time delay from the
moment when a threat is sensed, through all phases of
Lawson's cycle, up to the moment when a counter-measure
is undertaken against that threat. Being perceived and
modeled as a (complex) queueing system, the simulation model
has to be capable of giving the answer about overall queueing
delays.
Also, the simulation model has to be composed in such a way
as to be able to offer answers about partial queueing delays
and statistics for each of the consecutive phases of
Lawson's cycle. This insight may show potential bottle-necks in the overall process. And bottle-neck
identification in a complex network helps to make the right
decisions on how and where to make changes, improvements
and investments, and where it is not necessary.

However, due to the mathematical and computational
complexity of queueing theory, it is hard to obtain
applicable closed-form analytical solutions for many types
of real queueing systems. This is particularly evident for
queueing models with non-exponential service and non-Poisson arrivals of demands. In such cases, researchers
usually apply the Monte Carlo simulation methodology, [8].

Development of Command and Control systems for
military applications is a complex task which is specific and
unique for every particular case. Differences come from
different organizations, missions and equipment, so the
C2 system for an army unit will differ from the C2 for an air
force unit or from the C2 for big, complex and relatively
autonomous platforms (like ships and submarines). Also,
Command and Control systems are subject to periodical
reengineering due to occasional changes, for example:
involvement of a new piece of technology (new
communication equipment, or computers, or radar). Also,
researchers have made efforts towards development of a
methodology for design, experimentation and testing of
military command and control systems and processes, [9],
[10]. Armies of small countries are also determined to
develop their own command and control systems, [11].

4. C2 CONCEPT FOR SIMULATION

The computational limitations of queueing theory induce the
need for application of the Monte Carlo simulation
methodology in modeling and analyzing Command and
Control systems, as indicated above. That produces
requirements for development and transformation of the
starting conceptual model of C2 (Lawson's model)
towards the appropriate simulation conceptual model.
Picture 2 illustrates this transition. The left side of Picture 2
presents the basic concept of Lawson's C2 model (boxes and
arrows in vertical order). The right side of Picture 2
presents the basic algorithm for the simulation model of
Lawson's C2 model.

5. CONCLUSION

Both graphical schemes in Picture 2 are put together in
parallel in order to give better insight into the development of the
simulation model and the transitions of the initial concept
towards a consistent algorithm of the simulation model. In fact,
the conceptual queueing model of Lawson's C2 presented in
Picture 1 would be placed in the middle of Picture 2,
between the left and right part of Picture 2, in order from left
to right as follows: Basic Lawson's concept; Queueing

Command and Control issues, which are related to
different types of military units, headquarters and
platforms, are the subject of serious research efforts, while the
appropriate theoretical basis is still underdeveloped or
unpublished. Lawson's general concept of Command and
Control is still usable in modeling and analysis and serves
as a starting point for more detailed presentations.

Picture 2. Algorithmic concept for the simulation model of Lawson's Command and Control model
Queueing theory and Monte Carlo simulation modeling
are among the most appropriate scientific tools for
describing and solving problems related to the Command
and Control.

Future work could be multidirectional. In regard to the
presented conceptual model, the next step will be
formalization of the algorithm for computer implementation
of the simulation model and its use in experimentation.
Second, due to the confidentiality of all aspects of modern
Command and Control systems, it will be challenging to
follow all state-of-the-art trends of development in the
field.

Acknowledgement

The work is partially supported by the Ministry of
Education and Science of the Republic of Serbia under
Interdisciplinary Project No. III-47029.

References
[1] Builder,C.H., Bankes,S.C., Nordin,R.: Command Concepts: A Theory Derived from the Practice of Command and Control, Technical Report MR-775, RAND, 1999.
[2] DoD Dictionary of Military and Associated Terms, www.dtic.mil
[3] Lawson,J.: Command and control as a process, IEEE Control Systems Magazine, March 1981, 5-12.
[4] Terzic,M., Talijan,M., Slavkovic,R.: How to create organizational structure of the military intelligence unit to support the decision-making in joint operations, Proceedings of the 7th DQM International Conference, 29-30 June 2016, Prijevor, Serbia, 517-523.
[5] Klingbeil,R., Sullivan,K.: A Proposed Framework for Network-Centric Maritime Warfare Analysis, Technical Report 11,447, 15 July 2003, Naval Undersea Warfare Center Division Newport, Rhode Island, USA.
[6] Bosquet,S.: Queueing Theory with Reneging, Defence Science and Technology Organisation, Defence Systems Analysis Division, Department of Defence, Australia, 2005.
[7] Barrer,D.Y.: Queueing with impatient customers and indifferent clerks, Operations Research, 5(5) (1957), 644-649.
[8] Sullivan,K., Grivell,I.: QSIM: A Queueing Theory Model with Various Probability Distribution Functions, Technical Report 11,418, 14 March 2003, Naval Undersea Warfare Center Division Newport, Rhode Island, USA.
[9] Osmundson,J.: A Systems Engineering Methodology for Information Systems, Systems Engineering, 3(2), 2000, 68-81.
[10] Skyttner,L.: Systems theory and the science of military command and control, Kybernetes, 34(7/8), 2005, 1240-1260.
[11] Manjak,M., Miletic,S.: Predlog koncepta komandno-informacionog sistema brigade KoV Vojske Srbije, Vojnotehnički glasnik, LIX(2), 2011, 78-93 (in Serbian).


GENERATING EFFECTIVE JAMMING AGAINST GLOBAL NAVIGATION SYSTEMS
SERGEI KOSTROMITSKY
"Radio Engineering Center of the National Academy of Sciences of Belarus", Minsk, e-mail: rts_nanb@tut.by
ALIAKSANDR DYATKO
"Radio Engineering Center of the National Academy of Sciences of Belarus", Minsk, e-mail: rts_nanb@tut.by
PETR SHUMSKI
"Radio Engineering Center of the National Academy of Sciences of Belarus", Minsk, e-mail: rts_nanb@tut.by
YURY RYBAK
"Radio Engineering Center of the National Academy of Sciences of Belarus", Minsk, e-mail: rts_nanb@tut.by

Abstract: The article treats the possibilities of generating effective jamming against global navigation system users'
equipment. It reveals that application of special ways of controlling a system of spatially distributed GNSS jamming
transmitters makes it possible to generate jamming that defeats the potentially feasible antijamming means.
Keywords: global navigation systems, adaptive antennas, auto jamming canceller, pattern.
protection enabling the functioning of navigation system
UNE under jamming, in their location, remains topical.

1. INTRODUCTION
The global navigation systems (GPS, GLONASS,
Galileo, Compass/ BeiDou, eventually, IRNSS and
QZSS) are playing an increasingly important role in
human activities. Their impact on military science is
unprecedented. The quality of control exercised over
troops has dramatically improved using GNSS, modern
computer and communication assets. GNSS are very
effective in controlling air traffic and weapons.

2. GNSS VULNERABILITY TO JAMMING.

However, in reality the GNSS (Global Navigation


Satellite System) user navigation equipment (UNE) still
plays a mainly secondary role in navigation and weapon
guidance systems, military and special equipment. The
principal reason behind it is exceptional vulnerability of
GNSS UNE to jamming.
To remediate this situation, major designers and
producers of UNE have carried out a wide scope of R&Ds
aimed at promoting UNE jamming immunity.
As the designers claim, the newly created satellite
navigation system has succeeded in receiving GNSS
signals even in the conditions of directed active jammers,
which makes UNE practically immune to electronic
countermeasure (ECM) assets. Among others, the claim
comes from naval-technology.com. According to this
source, tests of this new jam-protected satellite navigation
systems were conducted at a naval base in Maryland,
using UAVs, whereby, due to employment of special
antennas, the impact of directed jamming generators was
successfully neutralized.

The principles of design of all GNSS are very similar and
consist in transmitting mutually synchronized high-frequency broadband signals from a constellation of
navigation satellites (24-30 units) from orbits on the order
of 20,000 km above the Earth. The totality of such signals
from several satellites is simultaneously received and
processed by a ground-based (or on-board) navigation
receiver, whereby, as a result of processing, information
is extracted about the time of arrival of the signal of each
satellite in a uniform time system. The spatial coordinates
of each of those satellites are known in advance. Usage of
the obtained and a priori known information, as well as of
certain additional, so-called ephemeris information,
makes it possible to unambiguously determine, with high
accuracy, the spatial position of the phase center of the
GNSS receiver antenna.
Continuous coded phase-shift-keyed (PSK) signals are
usually employed as navigation signals, with 1-2 MHz
and 10 MHz spectrum width at 1200-1600 MHz carrier
frequency. The law of modulation of the navigation
signals normally represents variations of the so-called
zero sequences of maximum duration (long-duration
recurrent sequences with a low sidelobe level following
compression), called M-codes.
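As a small illustration of why such maximum-length sequences are attractive (a generic Python sketch of ours, not an actual GNSS ranging code), the fragment below generates a 127-chip m-sequence with a linear feedback shift register and shows that its periodic autocorrelation has a single high peak and off-peak values of magnitude 1; the tap choice (the primitive polynomial behind stages 7 and 6) is an illustrative assumption.

    def m_sequence(taps=(7, 6), n=7):
        """Maximal-length (m-) sequence from a Fibonacci LFSR with the given feedback taps."""
        state = [1] * n                      # any non-zero initial state
        out = []
        for _ in range(2 ** n - 1):          # full period of a maximal LFSR
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]
        return out

    seq = m_sequence()
    c = [1 if b else -1 for b in seq]        # map chips to +/-1
    N = len(c)
    ac = [sum(c[i] * c[(i + k) % N] for i in range(N)) for k in range(N)]
    print("code length:", N)                                  # 127 chips
    print("autocorrelation peak:", ac[0])                     # 127
    print("max off-peak magnitude:", max(abs(v) for v in ac[1:]))   # 1 for an m-sequence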
Vulnerability to active jamming is a fundamental specific
trait of GNSS. There are three clear physical factors
causing it:
- great signal transmission distance (~20,000 km);

Evidently, the task of developing efficient means of


suppressing radio navigation systems and means of their

A sample advisable spread of jamming transmitters
(shown as blue triangles) across a certain territory is
presented in Picture 1. The oval bodies represent
horizontal cross-sections of the antenna patterns of
individual transmitters. The square stands for the system
control post. One can see that such a distribution of the
transmitters and orientation of their antennas form a
jamming "patchwork blanket" wherein the moving victim
object of jamming always hits the main jamming zone of
at least one jamming transmitter (in reality, it is
simultaneously impacted by 3-5 and more signals of the
neighboring transmitters), whereby information about its
own coordinates is lost due to permanent suppression of
reception of GNSS signals.

- limited power of the satellite signal (10-50 W);
- low gain of the satellite transmitter antenna
(evidently, usually not more than 10-15 dB).

Therefore, the power flux density of the signal of a single
navigation satellite at the Earth's surface, even if losses are
ignored, is extremely small and does not exceed 10⁻¹³
W/m². Obviously, generating effective jamming by
ground-based transmitters over the actual distances of 30-150 km, against such a low power of the useful signal, poses
no technical problem. The 30-150 km distance is cited as
actual due to the limitation of the line-of-sight caused by
the Earth's sphericity and the limitation of the height to which the
jammer antenna can be raised. Even portable GPS
jammers can, over such distances and with an emitting
power of a few to tens of watts, ensure a power excess
over the satellite signal of 40-60 dB, even through the
sidelobes.
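As a rough cross-check of the 40-60 dB figure (an illustrative Python sketch of ours, not part of the paper), the fragment below evaluates the jamming-to-signal power flux ratio under a simple free-space, line-of-sight assumption with a 10 dB jammer antenna gain; the power/range combinations are assumed for illustration only.

    import math

    S_SAT = 1e-13      # satellite signal power flux density at the Earth surface, W/m^2 (from the text)

    def jammer_flux(p_tx_w, gain_db, r_km):
        """Free-space power flux density of a ground jammer at range r_km (line of sight assumed)."""
        eirp = p_tx_w * 10 ** (gain_db / 10.0)
        return eirp / (4.0 * math.pi * (r_km * 1e3) ** 2)

    for p, r in [(10, 100), (10, 30), (50, 150)]:       # illustrative power/range combinations
        js_db = 10 * math.log10(jammer_flux(p, 10.0, r) / S_SAT)
        print("P = %3d W, R = %3d km  ->  J/S = %.0f dB" % (p, r, js_db))

The three cases give roughly 39, 50 and 43 dB of excess, consistent with the 40-60 dB range cited above.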

3. NOTES ON REASONABLE ORGANIZATION OF A GNSS JAMMING SYSTEM.

3.1. Feasible spatial configuration of a GNSS jamming system.

The above-stated makes it possible to assert that attaining
high values of the jamming-to-signal ratio in jamming
generation poses no technical problem. To ensure
steady jamming of GNSS receivers, one has to determine
the reasonable level of the jam power within the selected
jamming system configuration. In selecting the spatial
configuration, one should take into account a number of
factors:
- firstly, it has been practically ascertained that, in the
frequency range under consideration, jamming is
effective within the line-of-sight distance;
- secondly, based on the analogy with the principles of
coverage in the nearby cellular communication
frequency range, we can easily see that the configuration
of a territorial jamming system has to be similar
to the structure of a GSM 900/1800 cellular
communication system, i.e. be a multi-position one
with a grid step of 10-30 km over open terrain
and a considerably smaller step in the mountains and
inhabited areas, to ensure the required line-of-sight
effect (see the sketch after this list);
- thirdly, the problem of selecting the power of the
jamming signal of a single transmitter is highly
debatable. Evidently, increasing the transmitter
power to, say, a few kilowatts (as some experts
suggest) boosts the jamming effect, increases the
territory overlap factor, makes it possible to reduce
the quantity of transmitters and, generally, reduces
the system cost. However, it is obvious that a
physically stable GNSS jamming system across a
certain territory has to be a multi-position one,
whereby increasing the number of transmitters
complicates counteracting it. Increasing the number
of jamming transmitters also corrupts the potential
quality of suppressing them through coherent
compensation.
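To make the grid-step trade-off from the second point concrete, the following short Python sketch (ours, not from the paper) estimates how many transmitters a hexagonal grid would need over an assumed 100 km x 100 km territory; the cell geometry and territory size are illustrative assumptions.

    import math

    def sites_needed(area_km2, grid_step_km):
        """Rough number of jamming sites for a hexagonal grid with the given step (illustrative only)."""
        cell_area = (math.sqrt(3) / 2.0) * grid_step_km ** 2   # area served by one site
        return math.ceil(area_km2 / cell_area)

    AREA = 100.0 * 100.0          # an assumed 100 km x 100 km territory
    for step in (10, 20, 30):
        print("grid step %2d km -> about %4d transmitters" % (step, sites_needed(AREA, step)))

Reducing the step from 30 km to 10 km multiplies the number of sites roughly ninefold, which is exactly the cost that lower per-transmitter power has to buy back in survivability.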

Picture 1. Example of deployment of jamming transmitters in the terrain

Note. In the course of the latest conflict in Iraq (prior to
the toppling of the regime), several sufficiently powerful
GPS jammers were deployed across the country's
territory. As a result of their employment, the aggressor
lost a few dozen cruise missiles during the first three
days of the conflict.
Once the causes of the deterioration of the effectiveness of firing
were determined, firings were suspended. The location of the
jammers was pinpointed and they were destroyed. Firings
were then resumed with normal efficiency.
Lesson to be learnt: preferably, it should be lower power
but more transmitters scattered across the territory. This
would obstruct detection and destruction of the system's
elements, making it less vulnerable.

3.2. Reasonable jamming power and law of modulation.

The years-long experience of using GNSS jammers in
different conditions shows that, based on the criteria of:
- relatively low per-unit cost of the jammers;
- reasonable rise of the transmitters above the ground
(depending on the terrain, commensurable with the
height of buildings, GSM towers, etc.);
- employment of mobile transmitter antennas with a
low gain, not exceeding 10-15 dB;
- possibility of relatively protracted use of the jammers
with autonomous chemical primary power supplies,
the optimum value of the jammer output power
would be 10-50 W.

Using a lower power would entail an unreasonable
increase in the number of transmitters required for
creation of a continuous jamming field. Using a higher
power would deteriorate mobility (meaning the portable
and towed jammer versions) and the autonomous operation
time (at least a few hours), and would facilitate reconnoitering the
jammers' positions for their destruction by fire.

The jamming law of modulation is a special subject, a matter
of know-how. We would note here that even a jammer
in the form of a continuous sinusoid within the C/A or
P/Y code bands will be sufficiently effective.
It has been established that jamming by sheer power can
force the receivers into a non-linear saturation
mode (effectively, blind them), whereas a lower power
will result in the effect of disruption of the resolution of the
navigation problem without the above-mentioned non-linear effects. It has also been established that
employment of a jammer with complex modulation
affecting separate sub-systems of the navigation receiver
(delay time, frequency, phase tracking) and suppressing
reception of ephemeris information provides a
considerable economy of the jam power (or, given all
other equal conditions, increased effectiveness of the
navigation jamming system).

jammer signal spectrum, but not too much in terms of
additional power spent in jamming. A premium of
power in using the new navigation signals is achieved
mainly by narrowing down the satellite antenna's
pattern to the diameter of a spot on the ground of a
few hundred kilometers. The declared energy
premium of 20 dB, even though doubtful,
nevertheless does not negate the approach to
jamming under discussion as unfeasible or
ineffectual;
quite likely, the most potentially effective way, out of
the suggested ones, would be to introduce auto
jamming canceller systems into GNSS receiver
designs (usually called "adaptive antenna arrays" in
the Western literature). A generic structural chart of
an auto jamming canceller is presented in Picture 2. It
is exactly the potential effectiveness of coherent
jamming of GNSS that our Proposal is dedicated to.

4. POTENTIAL POSSIBILITIES OF
PROTECTING GNSS FROM JAMMING.


The practical possibility of jamming GNSS receivers,
demonstrated in the early 2000s, has stimulated special
research and improvement of satellite global navigation
equipment. A number of publications have seen the light,
mostly American ones, dealing with tangible
achievements in perfecting the anti-jamming capacity of
GNSS. Yet, it remains evident that no considerable
achievements in enhancing the anti-jamming capacity of
GNSS can be expected, by virtue of their principles of
operation and the fact that the satellite constellations have
already been formed.

Picture 2. Auto jamming canceller block diagram
Picture 2 shows the GNSS receiver's main antenna,
indexed 0, and four additional antennas, indexed 1-4.
Following the analog processing, the signals of the additional
antennas are weighted, with subsequent summing resulting
in rejection of the jammer from undesirable directions.
The weight vector elements are formed in the control
device CD.

In the present treatise, we are not aiming at a detailed
analysis of the ways of improving GNSS anti-jamming;
we shall only mention the most effective ones (in this
case, we will deal only with GPS):
- introducing the new L5 frequency channel, which will
be used to transmit both the C/A- and P/Y-code
signals (indeed, jamming the new frequency channel
will, to an extent, complicate the jamming
transmitters' design, but this is of no particular
importance);
- introducing the new law of modulation of the
navigation signal (distinct from the Gold code as a
variation of the M-codes mentioned earlier) with an
expanded spectrum, announced as especially jam-protected. It is transmitted on the L1 and L2
frequencies. To us it is obvious that this dramatic
solution is not decisive. It does influence the effective

One can confidently assert that, provided the ways of
controlling the system of spatially distributed GNSS
jamming transmitters are properly selected, it is possible
to generate jamming that defeats the potentially feasible
antijamming means.

5. TECHNIQUES OF DEFEATING ANTIJAMMING MEANS.

The jamming canceller's input signals S0' and
S' = {S1', ..., S4'} represent the signals of the GNSS receiver's
main antenna (S0') and of the four additional antennas (S1', S2', S3',
S4'). Here and below, bold type stands for vectors and

matrices, and the symbol T stands for transposition. The
practical number of additional antennas is usually 2-5.
Following the analog processing, vector S' is transformed
into vector S = {S1, ..., S4}T and, for coherent jamming
cancellation, is multiplied by the weight vector W = {W1,
W2, ..., W4}. Vector W, in the case of a completed
adaptation procedure, assumes the value

W = R^(-1) · R0,

where R (the temporal or statistical average of S·S+) is the jammer
correlation matrix and R0 (the temporal or statistical average of
S·S0*) is the vector of cross-correlation of the signals of the
additional and main antennas; + and * denote the Hermitian
conjugate and the complex conjugate, respectively.

The essence of the proposed changes is a controllable variation of the
conditionality of the jam correlation matrix, or the
employment of "blinking" (periodically disappearing and
reappearing) jammers. The rate of changes is directly
related to the bandpass of the closed-loop auto jamming canceller
as an automatic control system.
The result of implementation of the proposed principle, in
terms of quality, is shown in Picture 3. Picture 3 shows
the antenna pattern of a GNSS receiver's main channel
while the auto jamming canceller is deactivated, and the
directions to three simultaneously functioning jamming
sources.

The description of the cancellation procedure is widely
known from the theory of adaptive antennas. The
procedure of determining the weight vector W cancelling
the jammer in an optimal manner can be realized in a
number of ways, as the result of carrying out different
self-tuning procedures or through direct calculations of
different types. The most important reason why we have
initiated this discussion with formulas, and the
foundation of our proposal, is that the efficiency and
convergence of any of the self-tuning procedures
(adaptive ones or those based on direct calculations) depend on the
conditionality of the jammer correlation matrix R,
whereby the worse the conditionality is, the worse the
convergence. In practical terms, this means that when
matrix R is poorly conditioned, the jamming canceller's
tuning will proceed inadmissibly slowly, i.e. slower than
required to attain the effect of reliable cancellation of the
jam in real time.
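To make the dependence on conditionality tangible, the following small numerical sketch (in Python with NumPy; ours, not part of the paper) builds the auxiliary-antenna correlation matrix R for two jammers whose waveforms have a controllable cross-correlation factor and reports how the spread of its dominant eigenvalues, which governs the time constants of gradient-type self-tuning, grows as the jammers become correlated. The antenna geometry, noise level and correlation values are illustrative assumptions; the 0.41 and 0.81 values mirror Table 1 below.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 50000, 4                                   # snapshots, auxiliary antennas
    a1 = np.exp(2j * np.pi * rng.random(M))           # assumed steering vectors of two jammers
    a2 = np.exp(2j * np.pi * rng.random(M))

    def aux_corr_matrix(rho):
        """Correlation matrix R of the auxiliary antennas for two unit-power jammers
        whose waveforms have cross-correlation factor rho (receiver noise std 0.1)."""
        j1 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        jx = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        j2 = rho * j1 + np.sqrt(1.0 - rho ** 2) * jx
        noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
        S = np.outer(a1, j1) + np.outer(a2, j2) + noise
        return (S @ S.conj().T) / N

    for rho in (0.0, 0.41, 0.81, 0.99):
        eig = np.sort(np.linalg.eigvalsh(aux_corr_matrix(rho)))
        print("rho = %.2f   dominant eigenvalue spread = %7.1f   cond(R) = %9.1f"
              % (rho, eig[-1] / eig[-2], eig[-1] / eig[0]))

A larger eigenvalue spread means that the weakly excited adaptation mode (the one responsible for separating the correlated jammers) converges much more slowly, which is the effect exploited in the proposal.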

Picture 3. Receiver's main channel antenna pattern: a)
auto jamming canceller deactivated, b) auto jamming
canceller activated with typical jamming, c) special
algorithm of synchronous jam control used.

We shall remind here that correlation auto jamming
cancellers are manifestly parametric devices. This means
that their properties, including speed and efficiency,
greatly depend on the properties of the received signal, and they
practically cannot be effectively rid of this
dependence. In particular, the conditionality of the correlation
matrix R has a direct influence on the duration of the self-tuning process [1,6].

Picture 3b shows the result of normal cancellation of
jamming by a correlation auto jamming canceller,
demonstrating the effectiveness of normal suppression of
jamming declared by the authors of the project (minimum 60 dB;
apropos, an obviously exaggerated figure). Upon activation of the
special algorithm of synchronous jam control, the depth
of the nulls in the pattern will drop sharply. We predict
that the feasible cancellation efficiency will not exceed 5-7 dB, i.e. the auto jamming canceller will stop resolving
its task normally.

By shaping the jamming simultaneously using several
transmitters, we can easily control the conditionality of the
matrix R through synchronous control of the power of
individual transmitters and interconnection of the laws of
modulation of the jamming they radiate.

Formulating the essence of the idea.

1. The subject of the discussion is a controllable multi-position system for jamming jam-protected GPS receivers
by sheer power.

6. CONTENT OF THE WORK.

In order to verify the assumptions put forth, a simulation
model of an automatic jamming canceller was realized,
whose receiver element configuration is shown in
Picture 4 (0 - position of the main antenna, 1-4 -
positions of the canceling antennas). Data from open
literature sources were used in selecting the configuration
[4,5].

2. The system principle of operation consists in the
synchronous employment of several spatially distributed
GPS signal jamming transmitters, while simultaneously
controlling the jamming signal power values and (or) the
jamming signal modulation parameters.
3. The objective of the simultaneous jam control is to make
the auto jamming cancellers in the GNSS receivers
function constantly in a transitional mode, by changing
the jamming environment.

The main channel antenna pattern (AP) is shaped as a
cardioid of rotation. The choice of the shape is not
incidental, as similar antenna patterns are typical for GPS

equipment of several producers, e.g. Sarantel Group PLC,
Great Britain. Antennas generating an AP of this shape
ensure phase stability of the received signal in any
direction as well as satisfactory gain and ellipticity; in
addition, they have small dimensions.

source. The cross-section of the altered spatial AP, upon
formation of a null in the direction of the jamming
source located at an angle of θ = 30°, on a logarithmic scale,
is shown in Picture 6. The value at the AP null ("jamming
suppression intensity") was used as the auto jamming
canceller efficiency criterion.

Picture 6. Spatial AP cross-section

The power of the jamming sources in the simulation
model exceeds 1000 times the power of the jamming
canceller's internal noise, which corresponds to the power
received by the jamming canceller's antennas from a 10 W
jamming source located at a distance of about 100 km (we
assumed the jammer's antenna gain to be equal to 10 dB).
The jamming source power modulation followed the law
Picture 4. Configuration of the receive elements

The shape of the canceling channels' AP is a toroid,
selected as such to ensure suppression of the useful signal
at the channel inputs on its arrival from the direction
corresponding to θ = 0. Thus, the influence of the useful signal
on the adaptation circuits is eliminated (weakened) on its
arrival from the zenith direction. The shapes of the
antenna patterns of the main and canceling antennas of
the simulation model are shown in Picture 5.

Pi(t) = P0 · (1 + cos(2π·fm·t + φ0))          (1)

where P0 is the assigned power value, fm is the jammer
modulation frequency, and φ0 is the jammer initial phase.


The results obtained by six experiments with different
source data are presented in Table 1.

Table 1. Modeling results (experiments 1-4)

Number of jamming sources                    1      1      2      2
1st jammer angular position θ (deg)          30     30     30     30
1st jammer angular position φ (deg)          0      0      0      0
2nd jammer angular position θ (deg)          -      -      60     60
2nd jammer angular position φ (deg)          -      -      90     90
Modulation presence                          No     Yes    No     Yes
Jamming signals cross-correlation factor     -      -      -      -
1st jammer cancellation factor (dB)          41.6   13.0   10.7   7.3
2nd jammer cancellation factor (dB)          -      -      19.7   15.9

Picture 5. Main and canceling channels AP shape


Below, we present a description of the numerical
experiments done using the auto jamming canceller model
developed, as well as conditions of their performance.
In the course of the experiments, we researched the
influence of modulation of the jammer signals power and
cross-correlation of jamming signals on formation of the
auto jamming cancellers jamming zones.

Table 1 (continued). Modeling results (experiments 5 and 6)

Number of jamming sources                    2      2
1st jammer angular position θ (deg)          30     30
1st jammer angular position φ (deg)          0      0
2nd jammer angular position θ (deg)          60     60
2nd jammer angular position φ (deg)          90     90
Modulation presence                          No     No
Jamming signals cross-correlation factor     0.41   0.81
1st jammer cancellation factor (dB)          9.9    7.4
2nd jammer cancellation factor (dB)          11.5   7.1

7. CONCLUSION

Each experimental test used a definite set of source data.

Analysis of the results obtained in the simulation modeling
casts doubt on the assertion that employment of the new
jam-protected satellite navigation system renders the user
navigation equipment practically invulnerable to
electronic countermeasures (ECM).

Based on sectioning of the spatial AP at the jamming
canceller output by a plane passing along axis Z from the
jamming source direction, we determined the intensity of
the AP compression in the direction towards the jamming

Using modulation of the jammer signals with a cleverly
selected modulation frequency tangibly corrupts the
operational efficiency of auto jamming cancellers.


References
[1] Monzingo,R., Miller,T.: Introduction to Adaptive Arrays, Wiley, New York, 1980.
[2] Ablameyko,S.V.: Global navigation satellite systems, BGU, Minsk, 2011.
[3] Brown,A., Reynolds,D., Roberts,D., Serie,S.: Jammer and interference location system - design and initial test results, Proceedings of the ION GPS-99, September 1999.
[4] Bey,N., Vyachtomov,V.A., Zimin,V.N.: Satellite communication and navigation antennas, Moscow, 2010.
[5] Prikazchikov,A., Oganesyan,A., Pastukhov,A.: Industrial-use jam-protected GLONASS/GPS equipment, Aerospace Courier, 2013.
[6] De Lorenzo,D.S., Rife,J., Enge,P., Akos,D.M.: Navigation Accuracy and Interference Rejection for an Adaptive GPS Antenna Array, Proc. ION GNSS, 2006.

The simultaneous employment of several jammers from
different directions, even in a quantity smaller than the number of
the jamming canceller's degrees of freedom, will result in
a drop of the jamming canceller's suppression factor by 20-30
dB, whereas the employment of cross-correlated jamming
will decrease the jamming canceller's efficiency down to
7 dB. The five-fold (7 dB) premium in the
signal/(jammer+noise) ratio will shorten the UNE
suppression range 2.24 times, e.g. from 145 km (as
pointed out in [3]) down to 64.78 km for a 4 W jamming
transmitter.
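The 2.24 factor follows directly from the inverse-square dependence of the received jamming power on distance; a minimal Python check (ours, not part of the paper):

    # Received jammer power falls as 1/R^2 (free space), so a loss of X dB in usable
    # jamming power shortens the suppression range by a factor of 10**(X/20).
    loss_db = 7.0
    factor = 10 ** (loss_db / 20.0)
    print("range reduction factor: %.2f" % factor)           # ~2.24
    print("145 km shrinks to about %.1f km" % (145.0 / factor))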
Quite evidently, even in the face of such a deterioration of the
efficiency of a controllable multi-position system for jamming
jam-protected GPS receivers by sheer power, a
rational placement of the jamming transmitters and
control over the power and modulation parameters of the
jamming signals will make it possible to generate a
continuous, solid UNE jamming zone.


EFFICIENT POWER FLOW ALGORITHM, MODIFIED ALGORITHM OF NAHMAN AND PERIĆ

BRANKO STOJANOVIĆ
Technical Testing Center, Belgrade, e-mail: stojanovic.branko@rocketmail.com, elektronika@toc.vs.rs
MILAN MOSKOVLJEVIĆ
Technical Testing Center, Belgrade, e-mail: moskva984@gmail.com, elektronika@toc.vs.rs
TOMISLAV RAJIĆ
School of Electrical Engineering, Belgrade, e-mail: rajic@etf.rs
Abstract: This paper presents one novel power flow algorithm. A simple matrix equation with the bus admittance matrix is used
for network voltage calculation. Losses are obtained by deducting the total load consumption from the power injected into the
slack node. The time-consuming forward/backward sweep for Jacobian matrix and Y bus admittance matrix construction is
avoided. Simplification and inflexibility are characteristic of many power flow algorithms, which is the reason why they cannot
be used for the distribution network reconfiguration problem solved by the simulated annealing technique. A crucial problem
encountered by the distribution engineer is feeding the computer with the input data of the distribution system, which varies from
configuration to configuration and is the most time-consuming part of the procedure. In the presented programme, the only thing
that has to be done is setting the admittance of the branch to be opened to zero. For this reason, this power flow algorithm, combined
with a precise power flow algorithm, represents a powerful weapon of every distribution engineer for solving dynamic
distribution network problems such as reconfiguration or planning.
Keywords: efficient power flow algorithm, distribution network, distribution network reconfiguration, simulated annealing,
distribution network planning problem.
proportional to the node number [1]. Recent research
suggests some new ideas for taking into consideration the specific
distribution network topology, for which a new input data
format or exhaustive input data manipulation is necessary
[1]. Widely adopted is the forward/backward sweep method,
where cumbersome input data feeding for a level-drawn
network is exploited [1]. The biggest drawback of these
procedures is that input data must be generated and fed all
the time, for each new configuration, which makes them
practically useless for dynamic problems such as network
reconfiguration and expansion planning.

1. INTRODUCTION
Many problems related to real distribution system
applications, such as optimization, capacitor placement, voltage
regulation, planning, restoration, state estimation and so on,
seek an efficient power flow algorithm for network voltage
(branch current) and loss calculation, Teng; Sarić and
Ćalović [1,2]. Well known characteristics of a distribution
system are [1]:
- radial or weakly meshed configuration;
- 3-phase asymmetrical load;
- node point load distribution;
- numerous branches and nodes, and
- a wide range of values for branch reactance and
resistance.

The algorithm proposed in this paper is both novel and classical.
It involves building the network admittance matrix, which is the
only input parameter that changes with reconfiguring, while
all other input data remain the same and no
renumbering of nodes (not satisfactorily solved, Strezoski et
al. [3]) is necessary. This means that the forward/backward
sweep when building the Jacobian matrix or the node admittance
matrix with the traditional Newton-Raphson or the implicit Gauss Z-matrix algorithm is avoided [1]. Simple matrix iterative
operations are applied to calculate node voltages and losses.
The analyzed network can be radial or weakly meshed.
The proposed method is robust but demands an efficient power
flow to be applied at the end, Stojanović [4], because of
insufficient preciseness.

These features make the traditional power flow methods used
for transmission networks (the Gauss-Seidel and Newton-Raphson techniques) inappropriate, for lack of reliability
concerning distribution network demands [1].
The simplifications in the fast decoupled Newton-Raphson method
are not valid for distribution systems [1].
Several power flow algorithms specially designed for
distribution networks can be found in the literature [1]. Some of
these methods are based on loop topology, as for
transmission systems. Mostly applied is the implicit Gauss Z-matrix method, which does not exploit the feature that the
distribution network is radial or weakly meshed. For this
reason, the number of equations which ought to be solved is

2. EFFICIENT POWER FLOW ALGORITHM

While performing the efficient power flow algorithm, it is
supposed that the node injected currents can be calculated from
the node power loads, which is the common case in the reconfiguration
problem. The network under calculation is weakly meshed and
its node admittance matrix is singular, Heydt [5, p.17], so that
it cannot be inverted and used for node voltage calculation
directly. Instead of this, a simple formula is used and the
inversion of the singular matrix is omitted. The network supply
node (zero node) is denoted as the balance, slack node.

The recursive formula for the iterative node voltage calculation is, Nahman and Perić [6]:

[U(k+1)] = [Ydijaginv]·[J(k)] + ([UN] - [Ydijaginv]·[Y])·[U(k)]          (1)

where:
[U(k+1)] - (k+1)-th iteration of the node voltage vector,
[J(k)] - k-th iteration of the node injected current vector,
[Ydijaginv] - inverse of the diagonal matrix whose elements are the main diagonal of the node admittance matrix [Y],
[UN] - unit matrix with dimension equal to the number of network nodes, and
[Y] - node admittance matrix with dimension equal to the number of network nodes mentioned above.

The first iteration is:

[U(1)] = [Ydijaginv]·[Jnulto] + ([UN] - [Ydijaginv]·[Y])·[Unulto]          (2)

where:
[Jnulto] - zero iteration of the node injected current vector, and
[Unulto] - zero iteration of the node voltage vector = 12.66 kV, flat start (for the analyzed network).

The node voltage vector and the node injected current vector dimensions equal the number of independent nodes of the network. All variables in the above expressions are complex.

Convergence is achieved when the node voltage deviation in the last two iterations, as a relative error, is not greater than 0.0001.

Complex network losses are calculated as the difference between the power injected into the zero node and the total load consumption.

The losses calculated with this algorithm are only indicative; the precise power loss must be calculated with the efficient power flow [4] because of an error (see the Conclusions).

3. POWER FLOW PROGRAMME

A programme in MATLAB has been created: GENERALNI PROGRAM POLAZNA KONFIGURACIJA BEZ PETLJI, Stojanović and Moskovljević [7], for the network of Picture 1 with the data of Table 1, Baran and Wu [8], with the loops broken (tie branch admittances set to zero).

The programme input data are:
- the branch impedances, used for branch admittance calculation, to build the node admittance matrix [Y] (look into reference [5, p.17] to learn how the elements of this matrix are defined and entered),
- the zero iteration for the node voltage calculation = 12.66 kV, flat start, for the network in question, and
- the zero iteration for the node injected current vector calculation.

The zero iteration for the node injected current vector calculation (and, analogously, every k-th iteration) is given by the expression:

Ji(k) = (j·Qi - Pi) / Vi*(k)          (3)

where:
Qi - node i reactive power consumption,
Pi - node i active power consumption, and
Vi*(k) - conjugated node i voltage in the k-th iteration.

The supposition is that the node power consumption is known and constant for all network nodes, which is the case for the network in question [8].

The current injected into the slack node cannot be calculated by the above expression. This current equals the sum of the injected currents in all other nodes (with a minus sign), which is implemented in the programme GENERALNI PROGRAM POLAZNA KONFIGURACIJA BEZ PETLJI.

4. CONVERGENCE PROOF

Equation (1) is of interest. It is supposed that, after a large enough number of iterations (k),

[U(k+1)] = [U(k)]          (4)

After a large enough number of iterations we get from (1):

[U(∞)] = [Ydijaginv]·[J(∞)] + ([UN] - [Ydijaginv]·[Y])·[U(∞)]

Since [J(∞)] = [Y]·[U(∞)] (look in reference [5, p.16]), it follows that

[U(∞)] = [Ydijaginv]·[Y]·[U(∞)] + ([UN] - [Ydijaginv]·[Y])·[U(∞)]

[U(∞)] = [UN]·[U(∞)]

[U(∞)] = [U(∞)]

From equation (1) it can be observed that, during the calculation,

the value of the supplying node voltage may not remain constant and
can become bigger than the input value in module. The MATLAB
programme keeps this value constant in such a way that,
after one iteration and before the subsequent one, it is reset to the
prescribed 12.66 kV.
For the calculation of the current injected into the zero node, the block of
instructions after the for loop is used [7]. The node voltages in the last
iteration, when convergence is achieved, are input to equation
(3) and they define this current at the end of the calculation by
means of the injected currents into the load nodes (it equals the
negative sum of all load node injected currents in the last
iteration).
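To illustrate how equations (1) and (3) and the "open branch, zero admittance" rule fit together, the following Python/NumPy sketch (ours, not the MATLAB programme of reference [7]) runs the iteration on a small, hypothetical four-node feeder with one open tie switch. The network data, the handling of the slack injection during the iteration and the tolerance are illustrative assumptions.

    import numpy as np

    U_NOM = 12.66e3                              # slack (zero node) voltage, V
    # hypothetical 4-node feeder: (from, to, R_ohm, X_ohm, closed?)
    branches = [(0, 1, 0.10, 0.05, True),
                (1, 2, 0.15, 0.07, True),
                (2, 3, 0.20, 0.10, True),
                (1, 3, 0.30, 0.15, False)]       # open tie switch -> admittance set to zero
    P = np.array([0.0, 100e3, 90e3, 120e3])      # node active loads, W (node 0 is the slack)
    Q = np.array([0.0,  60e3, 40e3,  80e3])      # node reactive loads, var

    n = len(P)
    Y = np.zeros((n, n), dtype=complex)
    for i, j, r, x, closed in branches:
        y = 1.0 / complex(r, x) if closed else 0.0
        Y[i, i] += y; Y[j, j] += y
        Y[i, j] -= y; Y[j, i] -= y

    Dinv = np.diag(1.0 / np.diag(Y))             # [Ydijaginv]
    U = np.full(n, U_NOM, dtype=complex)         # flat start
    for it in range(1, 10001):
        J = (1j * Q - P) / np.conj(U)            # load-node injected currents, cf. Eq. (3)
        J[0] = -J[1:].sum()                      # slack injection balances the load injections
        U_new = Dinv @ J + (np.eye(n) - Dinv @ Y) @ U      # cf. Eq. (1)
        U_new[0] = U_NOM                         # slack voltage held at 12.66 kV
        converged = np.max(np.abs(U_new - U)) / U_NOM < 1e-4
        U = U_new
        if converged:
            break

    J = (1j * Q - P) / np.conj(U)
    S_slack = U[0] * np.conj(-J[1:].sum())       # power injected at the slack node
    S_load = (P + 1j * Q).sum()
    print("iterations:", it)
    print("node voltage moduli (kV):", np.round(np.abs(U) / 1e3, 4))
    print("indicative complex losses (kVA):", np.round((S_slack - S_load) / 1e3, 2))

Reconfiguring the network amounts to flipping the "closed" flags and rebuilding Y; all other input data stay untouched, which is the point the paper makes for simulated-annealing reconfiguration.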

5. APPLICATION

A) APPLICATION OF THE MODIFIED EFFICIENT POWER FLOW ALGORITHM OF NAHMAN AND PERIĆ [6] ON THE BARAN AND WU NETWORK (Picture 1)

2-22
22-23
23-24

0.4512
0.8980
0.8960

0.3083
0.7091
0.7011

90.00
420.00
420.00

50.00
200.00
200.00

0.9794
0.9727
0.9694

5-25
25-26
26-27
27-28
28-29
29-30
30-31
31-32

0.2030
0.2842
1.0590
0.8042
0.5075
0.9744
0.3105
0.3410

0.1034
0.1447
0.9337
0.7006
0.2585
0.9630
0.3619
0.5302

60.00
60.00
60.00
120.00
200.00
150.00
210.00
60.00

25.00
25.00
20.00
70.00
600.00
70.00
100.00
40.00

0.9477
0.9452
0.9337
0.9255
0.9220
0.9178
0.9169
0.9166

7-20
8-14
11-21
17-32
24-28

2.000
2.000
2.000
0.500
0.500

2.000
2.000
2.000
0.500
0.500

B) APPLICATION RESULTS OF THE MODIFIED EFFICIENT ALGORITHM OF NAHMAN AND PERIĆ [6] ON THE BARAN AND WU NETWORK (Picture 1) [7]

i = 347 (iteration number when convergence is achieved; the maximum is 1000 iterations, which is the for loop index; the convergence criterion is that the maximum relative value, taking 12.66 kV into account, of the voltage module deviation in the last two iterations is less than 0.0001)

CONFIGURATION (Picture 1): all black branches closed, all red branches open - the initial configuration.

Picture 1: Network Baran and Wu [8]

ABSUit1 = node voltage module vector in last iteration


(i=347, iteration number)
12.6600 kV- zero node (slack node=12.66 kV,
0 rad angle)
12.6290- 1 node
12.4855
12.4159
12.3478
12.1785
12.1506
12.1141
12.0744
12.0390
12.0331
12.0205
11.9688
11.9504
11.9393
11.9272
11.9091
11.9029
12.6224
12.5774
12.5686
12.5605
12.4408
12.3571
12.3153
12.1585
12.1308
12.0048
11.9136

The network has 33 nodes (node 0 - the slack node at 12.66 kV -
and an additional 32 independent nodes) and 37 branches (32 branches of
black color with sectionalizing, normally closed switches
and 5 branches of red color with tie, normally open switches);
any branch can be switched.
Table 1 gives the input data (branch resistance and reactance,
active and reactive load, and p.u. node voltage obtained by the
method of Dujić, DMS Novi Sad [3], and A. Sarić [2]).
Table 1: Input data
Branch
0-1
1-2
2-3
3-4
4-5
5-6
6-7
7-8
8-9
9-10
10-11
11-12
12-13
13-14
14-15
15-16
16-17

R ()
0.0922
0.4930
0.3660
0.3811
0.8190
0.1872
0.7114
1.0300
1.0440
0.1966
0.3744
1.4680
0.5416
0.5910
0.7463
1.2890
0.7320

X ()
0.0470
0.2511
0.1864
0.1941
0.7070
0.6188
0.2351
0.7400
0.7400
0.0650
0.1238
1.1550
0.7129
0.5260
0.5450
1.7210
0.5740

1-18
18-19
19-20
20-21

0.1640
1.5042
0.4095
0.7089

0.1565
1.3554
0.4784
0.9373

PL (kW) QL (kVar)
60.00
100.00
40.00
90.00
120.00
80.00
30.00
60.00
20.00
60.00
200.00
100.00
100.00
200.00
20.00
60.00
20.00
60.00
45.00
30.00
35.00
60.00
60.00
35.00
80.00
120.00
10.00
60.00
60.00
20.00
20.00
60.00
90.00
40.00
90.00
90.00
90.00
90.00

40.00
40.00
40.00
40.00

| V | p.u.
0.9970
0.9829
0.9755
0.9681
0.9497
0.9462
0.9413
0.9351
0.9292
0.9284
0.9269
0.9208
0.9185
0.9171
0.9157
0.9137
0.9131
0.9965
0.9929
0.9922
0.9916


11.8737
11.8309
11.8216
11.8189-32 node


0.9332
0.9638
0.9673
0.9995
0.9922
0.0200
0.0210
0.0211
0.0213
0.1268
0.1288
0.1300
0.4867
0.4928
0.5662
0.5932
0.6263
0.6418
0.6685
0.6517-32 node

ARGUit1 = node argument vector in last iteration (i=347,


iteration number)
0 rad- zero node (slack node=12.66 kV,
0 rad angle)
0.0002- 1 node
0.0012
0.0021
0.0030
0.0019
-0.0008
-0.0005
-0.0013
-0.0019
-0.0019
-0.0019
-0.0035
-0.0046
-0.0052
-0.0056
-0.0067
-0.0069
-0.0000
-0.0012
-0.0015
-0.0019
0.0007
-0.0009
-0.0016
0.0024
0.0033
0.0045
0.0056
0.0072
0.0056
0.0052
0.0050-32 node

Total complex consumption of all nodes (Sload), complex
loss power (Sloss) and active power loss (Ploss):
Sload = 3.7150e+003 + 2.3000e+003i - total consumption in all 32 nodes (kW + jkVar)
Sloss = 1.5343e+002 + 1.0362e+002i - losses (kW + jkVar)
Ploss = 153.4266 - active power loss (kW)
i = 347 (number of iterations when convergence is achieved)
Slack node injected current in the i-th iteration (i = 347, when convergence is achieved), Jitc0:
Jitc0 = 3.0556e+002 - 1.8986e+002i
module of Jitc0 = 359.7433 (A)
argument of Jitc0 = -0.5560 (rad)

DVMAX = achieved relative error with respect to 12.66 kV (voltage deviation between the next-to-last and last iteration)

6. CONCLUSIONS
1) The modified efficient power flow (from the Nahman and Perić
paper) calculates losses for the Baran and Wu network
amounting to 153 kW, while for the same example the
precise method of Dujić (DMS Novi Sad) [3] obtains 202 kW;

1.0e-004 *
0- zero node (slack node=12.66 kV,
0 rad angle)
0.0199- 1 node
0.1243
0.2002
0.2755
0.4631
0.5237
0.5778
0.7057
0.7984
0.8281
0.8316
0.9161

2) the difference results from the fact that the zero node
injected current by the Nahman and Perić modified method
amounts to 360 A (-0.556 rad), versus 364 A (-0.556 rad)
by Dujić, Sarić [2];
3) even this small difference, that is 4 A, multiplied by
12.66 kV, results in a surplus of 48 kVA in virtual power
loss and 50 kW in active power loss.
The modified efficient power flow algorithm of Nahman and Perić
can therefore be only indicative for power loss calculation in a
distribution network.
It is suitable for solving the reconfiguration power loss
It is suitable for solving reconfiguration power loss

[2] Sarić,A., Ćalović,M.: Algorithm for node voltage, power flow and loss calculation in radial distribution systems, Elektrodistribucija, Vol. 20, No. 3, pp. 127-140, 1992 (paper in Serbian).
[3] Strezoski,V. and collaborators: Basic power calculations for analyses and control of distribution networks (study in Serbian), Institute for Power and Electronics, Faculty of Technical Sciences, Novi Sad, 1998.
[4] Stojanović,B.: Efficient automatic program for voltage and current calculation in large scale radial balanced distribution networks without transformers, OTEH 2011.
[5] Heydt,G.T.: Computer Analysis Methods for Power Systems, Purdue University, Macmillan Publishing Company, New York, 1986, 359 p.
[6] Nahman,J.M., Perić,D.M.: Optimal Planning of Radial Distribution Networks by Simulated Annealing Technique, IEEE Transactions on Power Systems, Vol. 23, No. 2, pp. 790-795, May 2008.
[7] Stojanović,B.I., Moskovljević,M.: Matlab program GENERALNI PROGRAM POLAZNA KONFIGURACIJA BEZ PETLJI, 2015.
[8] Baran,M.E., Wu,F.F.: Network Reconfiguration in Distribution Systems for Loss Reduction and Load Balancing, IEEE Transactions on Power Delivery, Vol. 4, No. 2, pp. 1401-1407, April 1989.
[9] Lavorato,M., Franco,J.F., Rider,J.M., Romero,R.: Imposing Radiality Constraints in Distribution System Optimization Problems, IEEE Transactions on Power Systems, Vol. 27, No. 1, pp. 172-180, February 2012.
[10] Stojanović,B.: Simulated annealing method and its application to capacitor placement problem in radial distribution networks, Master's thesis (in Serbian), Faculty of Electrical Engineering, Belgrade, 1997.
[11] Jiang,D.: Electric distribution system reconfiguration and capacitor switching, Master's thesis, Worcester Polytechnic Institute, 77 p., May 1994.
[12] Jeon,Y.J., Kim,J.C., Kim,J.O., Shin,J.R., Lee,K.Y.: An Efficient Simulated Annealing Algorithm for Network Reconfiguration in Large-Scale Distribution Systems, IEEE Transactions on Power Delivery, Vol. 17, No. 4, October 2002.
reduction problem by the simulated annealing technique.
The connectivity, but not the radiality, problem is solved in the Nahman
and Perić article.
To commence the power loss calculation by the simulated
annealing method, it is necessary to build, by a random number
generator, the same number of open branches (all the time of
execution), maintaining the number of closed branches to be
one less than the number of nodes, Lavorato et al. [9].
Weakly meshed configurations of the Baran and Wu network
are cheapest, so they cannot be used for initiating the
algorithm. It is necessary to discard such a configuration as it is not
radial, i.e. it has loops.
It is mandatory to check radiality, which is illustrated in
reference [9] (a minimal check is sketched below).
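A minimal radiality check of the kind required above can be written in a few lines (a Python sketch under our own assumptions, not the procedure of reference [9]): a configuration is radial exactly when its closed branches form a spanning tree.

    def is_radial(n_nodes, closed_branches):
        """Radiality check: exactly n_nodes-1 closed branches and no loop (union-find);
        with n_nodes-1 edges and no loop, the network is automatically connected."""
        if len(closed_branches) != n_nodes - 1:
            return False
        parent = list(range(n_nodes))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for i, j in closed_branches:
            ri, rj = find(i), find(j)
            if ri == rj:               # closing this branch would create a loop
                return False
            parent[ri] = rj
        return True

    # tiny illustration on a hypothetical 5-node example
    print(is_radial(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))     # True  - a radial feeder
    print(is_radial(5, [(0, 1), (1, 2), (2, 0), (3, 4)]))     # False - loop and disconnected node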
Inflexibility or simplification is characteristic of many
power flow algorithms, for which reason they cannot be
used for solving the reconfiguration power loss reduction
problem by simulated annealing, Sarić and Ćalović,
Stojanović [2,10]. A problem encountered by the distribution
engineer is feeding the computer with concrete distribution
network data, which differ from configuration to
configuration and require much time for programme
preparation and running. For a Markov chain length of 100, one
hundred different configurations are generated, so that the
programme should be run 100 times with one hundred
different input data batches. The "double linked-list" data
structure for representing distribution network input data is
not well explained in Jiang, Jeon et al. [11,12]. In the presented
algorithm, all that ought to be done is setting the open branch
admittance to zero. For this reason, the developed algorithm,
applied together with a precise power flow algorithm, represents a
powerful weapon of every distribution engineer when
solving dynamic distribution network problems such as
reconfiguration or planning.

ACKNOWLEDGMENT
The third author wishes to express gratitude to the Serbian Ministry
of Education, Science and Technological Development, namely
Project III 42009, "Intelligent power networks".
References
[1] Teng,J.H.: A Direct Approach for Distribution System
Load Flow Solutions, IEEE transactions on power
delivery, Vol.18, No.3, pp.882-887, July 2003.


SOLID STATE L-BAND HIGH POWER AMPLIFIER USING GAN HEMT TECHNOLOGY

ZVONKO RADOSAVLJEVIĆ
Military Technical Institute, Belgrade, zvonko.radosavljevic@gmail.com
DEJAN IVKOVIĆ
Military Technical Institute, Belgrade, divkovic555@gmail.com
DRAGAN NIKOLIĆ
Military Technical Institute, Belgrade, nikolicdragansiki@gmail.com

Abstract: In this paper, we design a modular L-band high-speed pulsed high power amplifier (HPA) using GaN HEMT
technology. One power amplifier module has a high-voltage and high-speed switching circuit, based on a class-AB
power amplifier. The source and load impedance is balanced from eight equal modules and calculated by optimizing the
output peak power. The functional model of the PA provides a power added efficiency (PAE) of 61% together with a
power gain of 11 dB in the frequency band 1.2-1.3 GHz. As a result of the tests, after output combining, a total output peak
power of 800 W is obtained at a 1.3 GHz carrier frequency.
Keywords: L-band radar transmitter, HPA, GaN FET.
where the antenna is made up of a number of modules
each incorporating both transmit and receive path
amplifiers (T/R module).

1. INTRODUCTION
Radars used to be employed exclusively for military
applications. Now, typical implementations include
weather observation, civilian air traffic control, high-resolution imaging along with various military
applications such as ground penetration, ground and air
surveillance, target tracking, and fire control. When first
developed, however, their main application was the
identification of moving objects and targets over long
distances and, to achieve this purpose, early radar
transmitters needed to produce RF output powers of the
order of 100-500 W. The high-power nature of
radar transmitters led to the development of vacuum
electron devices (VED) such as travelling wave tubes
(TWT), klystrons, magnetrons, gyrotrons, and cross field
amplifiers (CFA). Civilian and military radar systems rely
on amplifiers to deliver pulsed and continuous wave
power. The high power RF pulse is utilized in many
applications including laser excitation, radars, etc. The
most common application of the RF high-power pulse signal is in
radar systems. We conducted research on high-speed
RF high-power pulse signals, but it is not compared to other
applications [1,2]. Solid-state power devices have smaller
output power and operating frequency, but wider
bandwidth and higher reliability in many contemporary
radar applications than vacuum tubes [3,4].

Solid-state power devices with modular output power (up to 100 W at L-band) have been
developed [5]. An output power of 500 W can be obtained by
combining several microwave devices in a modular
concept. The pulsed power amplifier can be realized by
using a switching method.
As a result of unavoidable losses in combining the outputs
of many solid-state devices, it is advantageous to avoid
combining before radiating, since combining in space is
essentially lossless. For this reason, many solid-state
transmitters consist of amplifier modules that feed
rows, columns, or single elements of an array antenna.
The most common example is phased array radar systems,
where the antenna is made up of a number of modules,
each incorporating both transmit and receive path
amplifiers (T/R modules).
The rest of the paper is organized as follows. In Section 2,
the high-voltage switching circuits (conventional
and operational) are presented. In Section 3, we present the
design of the new L-band high power amplifier. Results of
simulation are given in Section 4 to validate the proposed
HPA. Section 5 presents some concluding remarks.

2. HIGH VOLTAGE SWITCHING CIRCUIT


In radar and broadcast communication applications, T/R
components are key elements. The performance of the
T/R components affects the whole radar system.
Inside these T/R components, the high-power amplifier is
one of the most important microwave modules, used in the
final stages of radar and radio transmitters. Power-combining and/or phased-array techniques need to be
employed to achieve the required output levels with solid
state devices, so their application to such systems has so
far been limited. The main purpose of the practical tests
was to assess the suitability of commercial GaN
HEMT technology for radar applications. This meant
determining whether:

- the devices could be pulsed fast enough, i.e. at a pulse repetition frequency (PRF) representative of actual radar systems,
- the amplified pulses could meet the required specifications for rise and fall times, which in some systems may be less than 100 ns,
- the inter-pulse noise could be kept below a sufficiently low level so as to meet typical radar specifications,
- they could withstand pulsing at high voltages without being damaged or burnt out,
- there would be consistency across the amplified pulses, i.e. no difference in pulse shape and power across successive pulses,
- amplifier stability could be maintained in a pulsed regime over a wide range of operating conditions, i.e. at different bias levels, input powers and PRFs [5].

The output port of this circuit will be connected to a power amplifier. Picture 1 shows the conventional switching circuit. The operating current of the switching circuit is controlled by the load resistance R_Load. Q4 is a p-channel power MOSFET that switches the supply voltage of the power amplifier according to the input pulse. Q1-Q3 form a drive circuit for fast switching of Q4 using the input TTL level (5 V/0 V); it is based on a CMOS inverter circuit [6]. A small R1 gives a fast switching speed, but the current consumption is then increased. Q4 must be selected by considering its power handling capacity and its drain-source on-resistance R_ds(on). The operational switching circuit is shown in Picture 2.

Picture 1. Principle of the conventional switching circuit

Picture 2. Example of the operational switching circuit

The above example is used for the L-band radar design with GaN HEMT technology. This principle is used for each module.

3. DESIGN OF L-BAND HIGH POWER AMPLIFIER

An example of the schematic diagram of the four-way Wilkinson hybrid combined solid-state power amplifier [7] is shown in Picture 3. The four-way combined SSHPA was fabricated on the substrate. The amplifier module consists of Wilkinson couplers, an input matching circuit, an output matching circuit and a DC bias circuit. The four-way GaN-based amplifiers are connected in parallel with the two-stage Wilkinson hybrid couplers, which operate as a power divider at the input and a power combiner at the output, respectively.

Picture 3. Example of the schematic diagram of the GaN-based four-way combined amplifier

On the basis of the above-mentioned example, we built the SSHPA radar transmitter. The transmitter is built modularly. Each of the eight modules has an output of about 100 W. The modules are coupled to the output by a Wilkinson combiner [8]. The appearance of the transmitter is given in Picture 4. Each module receives excitation from pre-amplifiers, with the corresponding phase shift, depending on the number of modules. In addition, we designed a circuit for the analysis and control of all signals and conditions, so that a malfunction of the transmitter operation can be detected.

Picture 4. Appearance of the built HPA transmitter


4. RESULTS OF SIMULATIONS

In order to test the functionality of the HPA transmitter, we used a helix antenna with right circular polarization. The antenna is located at the receiving point and is realized to operate in the axial mode, as shown in Picture 5 [9].

Picture 5. Dimensions of the helix test antenna

The spatial orientation diagram of the helix antenna, at the frequency L2 = 1228 MHz, is given in Picture 6.

Picture 6. Spatial orientation diagram of the helix antenna (frequency L2 = 1228 MHz)

As confirmation of the successful operation of the transmitter, the spectral radiation patterns of the radar transmitting antenna were measured at the receiving point. The measurements were made across the whole range of operating frequencies (1200-1350 MHz). The laboratory measurements of pulse power are given in Table 1. The measurements were made with a Hewlett-Packard 436A power meter, with an artificial load at the output; a Hewlett-Packard 8015A pulse generator was used at the input.

Table 1. Pulse power versus operating frequency
Frequency [MHz]    Pulse power [W]
1200               859
1250               885
1300               905
1350               870

After laboratory testing, the solid-state HPA was tested on the terrain. The description of the terrain measurement method is given in Picture 7. The spectrum was analyzed with an Agilent instrument.

Picture 7. Description of the terrain measurement method

For comparison, Picture 8 displays the spectral radiation pattern of the signal from the original radar transmitter at the receiving point, and Picture 9 displays the spectral radiation pattern of the signal from the solid-state HPA transmitter, at a frequency of 1300 MHz.

Picture 8. Spectral radiation pattern of the signal from the original radar transmitter

Picture 9. Spectral radiation pattern of the signal from the solid-state HPA transmitter
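For illustration only (assuming ideal, lossless eight-way combining, which the measurements themselves do not state), the values in Table 1 can be converted to dBm and used to estimate the implied average per-module output; a minimal Python sketch:

import math

# Measured total pulse power from Table 1 (assumed lossless 8-way combining)
measurements = {1200: 859, 1250: 885, 1300: 905, 1350: 870}  # MHz -> W
n_modules = 8

for f_mhz, p_total_w in measurements.items():
    p_dbm = 10 * math.log10(p_total_w * 1000)   # total power in dBm
    p_module = p_total_w / n_modules            # implied per-module output
    print(f"{f_mhz} MHz: {p_total_w} W total ({p_dbm:.1f} dBm), "
          f"~{p_module:.0f} W per module if combining is lossless")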

5. CONCLUSION

The design of an L-band high-speed pulsed high power amplifier using GaN HEMT transistors is presented in this paper. To design the pulsed power amplifier, we proposed a novel switching circuit with a fast fall time and a high switching voltage. The output signal amplification uses a modular principle with 8 independent modules, all of which are connected at the common output to a power combiner. For testing, the SSHPA was connected to the radar antenna and the spectral waveform was recorded.

As a result of the tests, the output peak power is more than 900 W at 1.3 GHz. The proposed switching circuit appears to be applicable in pulsed amplifiers using all-solid-state power devices. The measured power and signal waveforms were the same as those of the original radar transmitter.

References
[1] Skolnik,M.I.: Radar Handbook, Second ed., McGraw-Hill, 1990.
[2] Micovic,M.: GaN double heterojunction field effect transistor for microwave and millimeterwave power applications, IEEE International Electron Devices Meeting (IEDM), Dec 13-15 2004, San Francisco, CA, United States, IEEE, Piscataway, NJ, United States.
[3] Chen,C., Hao,Y., Feng,H., Gu,W., Li,Z., Hu,S., Ma,T.: An X-band four-way combined GaN solid state power amplifier, Journal of Semiconductors, Vol.31, No.1, January 2010.
[4] Moon,J.S., et al.: Gate-recessed AlGaN-GaN HEMTs for high-performance millimeter-wave applications, IEEE Electron Device Letters, Vol.26, No.6, 2005, pp.348-350.
[5] Yi,H., Hong,S.: Design of L-Band High Speed Pulsed Power Amplifier Using LDMOS FET, Progress in Electromagnetics Research M, Vol.2, 2008, pp.153-165.
[6] Wu,Y.-F., et al.: Field-plated GaN HEMTs and amplifiers, IEEE Compound Semiconductor Integrated Circuit Symposium (CSIC), Oct 30-Nov 2 2005, Palm Springs, CA, United States, IEEE, New York, NY, United States.
[7] Zhao,S.-C., Wang,B.-Z., He,Q.-Q.: Broadband radar cross section reduction of a rectangular patch antenna, Progress In Electromagnetics Research, PIER 79, 2008, pp.263-275.
[8] Shi,Z.-G., Qiao,S., Chen,K.S., Cui,W.-Z., Ma,W., Jiang,T., Ran,L.-X.: Ambiguity functions of direct chaotic radar employing microwave chaotic Colpitts oscillator, Progress In Electromagnetics Research, PIER 77, 2007, pp.1-14.
[9] Fornetti,F., Beach,M.A., Morris,K.A.: Time and frequency domain analysis of commercial GaN HEMTs operated in pulsed mode, Asia Pacific Microwave Conference (APMC 2009), Singapore, IEEE Computer Society, 2009.

PERFORMANCE EVALUATION OF NONLINEAR OPTIMIZATION METHODS FOR TOA LOCALIZATION TECHNIQUES

MAJA ROSIĆ
Faculty of Mechanical Engineering, University of Belgrade, Belgrade, mrosic@mas.bg.ac.rs
MIRJANA SIMIĆ
School of Electrical Engineering, University of Belgrade, Belgrade, mira@etf.rs
PREDRAG PEJOVIĆ
School of Electrical Engineering, University of Belgrade, Belgrade, peja@etf.rs

Abstract: Source localization based on time of arrival (TOA) measurements is very important in military and civil
applications such as Wireless Sensor Networks (WSNs), sonar, radar, security systems, health monitoring, etc.
Localization measurements are corrupted by errors which always exist, no matter which localization technique is
used. Therefore, the TOA source localization model in the presence of additive noise can be formulated as an optimization
problem with the sum of squared residuals as the objective function. This paper presents the application of three
nonlinear optimization methods, the Steepest Descent, the Newton-Raphson and the Gauss-Newton methods, whose
performance is compared in terms of localization accuracy. Numerical simulation results illustrate the
performance comparison of these nonlinear optimization methods for different initial values and
signal-to-noise ratios (SNR). The corresponding Cramer-Rao Lower Bound (CRLB) on the localization error is
derived, which gives a lower bound on the variance of any unbiased estimator. Finally, the simulation results for the performance
of the proposed gradient-based optimization methods are evaluated and compared with the CRLB
and the closed-form LLS method.
Keywords: Wireless Sensor Networks, Localization, Optimization, Time of Arrival, Signal-to-noise ratio.

1. INTRODUCTION
Finding the position of an emitting source based on TOA
measurements from a set of sensors whose positions are
known is a fundamental problem in many applications
such as military target tracking [1], environmental
monitoring, telecommunications [2], security systems [3],
etc. In many of these applications, determining the
position of an emitting source from multiple sensors,
whose measurements are corrupted by errors, is a key
requirement. The localization problem can be
solved using various localization techniques such as the
time of arrival, the time difference of arrival (TDOA), the
received signal strength (RSS), or the angle of arrival
(AOA). The focus of this paper is the TOA-based localization
method, because of its high ranging accuracy and
relatively simple hardware structure. The distance
between the emitting source and the set of sensors is
generally estimated by measuring the time of travel of an
acoustic or electromagnetic signal. The main sources of
error arise from multipath propagation, additive
noise and time synchronization. Hence, the TOA-based
localization algorithm requires synchronization
between the emitting source and all of the sensors in the
WSN, as well as high-precision timing [4]. In many localization
algorithms, the line-of-sight (LOS) range measurements
are modeled using zero-mean white Gaussian noise.
There are numerous powerful estimation methods
available in the literature, such as the least squares (LS) and
the maximum likelihood (ML) methods, which are
efficiently employed to estimate the location
of an emitting source.

In this paper, the TOA positioning model is investigated
in the presence of measurement errors, where the
localization problem is defined as a least-squares
estimation problem which minimizes an objective
function represented as a sum of squared residuals.
The LS estimation problem can be divided into two
categories: the linear least squares (LLS) and the
nonlinear least squares (NLS).
The NLS estimation problem is an unconstrained
nonlinear optimization problem [5]. Gradient-based
unconstrained optimization methods, such as the Steepest
Descent, the Newton-Raphson and the Gauss-Newton
methods, are applied to the given NLS optimization
problem. These algorithms have high localization
accuracy, but require a good initial value to avoid
convergence to a local optimum.
The LLS method is a suboptimal technique that is widely
used in TOA localization because of its low
computational complexity. Therefore, it can be used to
obtain an initial position estimate for the initialization of
high-accuracy positioning algorithms such as the NLS
approach. Thus, the NLS problem is transformed into the
LLS problem using the Taylor series expansion.

The localization accuracy of the solution is related to the information in the measurements through the Cramer-Rao lower bound, which provides a lower bound on the variance of any unbiased estimator. The performance of the proposed TOA-based estimators is compared to the CRLB through the root mean square error (RMSE) in order to evaluate the localization accuracy.

The remainder of this paper is organized as follows. In Section 2, the source localization problem in WSNs based on TOA measurements is defined. Section 3 considers the emitting source localization problem modeled as an LS estimation problem; the Steepest Descent, the Newton-Raphson and the Gauss-Newton iterative algorithms for the NLS estimation problem are presented and then applied to estimate the position of the emitting source in WSNs. In Section 4, the CRLB is derived for the analysis of the localization accuracy. Section 5 gives the simulation results and the performance analysis of the proposed optimization methods for different signal-to-noise ratio levels. Finally, the conclusion and future work are given in Section 6.

2. PROBLEM FORMULATION

This section considers the localization problem in a two-dimensional (2-D) space using noisy TOA measurements, with the goal of accurately determining the location of an emitting source. Localization requires at least three sensors $\mathbf{x}_l = [x_l, y_l]^T$, $l \in \{1,2,\dots,N\}$, with known coordinates, which are employed to estimate the position of the emitting source, denoted by $\mathbf{x} = [x, y]^T$, where $[\cdot]^T$ is the matrix transpose operation.

In the absence of noise, the true distance between the emitting source and the $l$th sensor is expressed as

$$d_l = \|\mathbf{x} - \mathbf{x}_l\|_2 = \sqrt{(x - x_l)^2 + (y - y_l)^2}, \quad l \in \{1,2,\dots,N\}, \qquad (1)$$

where $\|\cdot\|_2$ stands for the 2-norm.

Therefore, the true distance between the emitting source and the $l$th sensor defines a circle centered at the sensor $\mathbf{x}_l = [x_l, y_l]^T$, $l \in \{1,2,\dots,N\}$, with radius $d_l$, where all the circles intersect at the same point $T$, as shown in Picture 1.

Furthermore, assuming that the noisy TOA measurements are independent and identically distributed (i.i.d.), the estimated distance between the $l$th sensor and the emitting source can be expressed as

$$r_l = c\,t_l = d_l + n_l = \sqrt{(x - x_l)^2 + (y - y_l)^2} + n_l, \quad l \in \{1,2,\dots,N\}, \qquad (2)$$

where $c$ is the speed of light, $t_l$ is the TOA of the signal at the $l$th sensor, and $n_l$ is the additive white Gaussian noise (AWGN), whose distribution is the normal probability function $\mathcal{N}(0, \sigma_l^2)$ with zero mean and known variance $\sigma_l^2$. However, in this case, the circles with radii $r_l$ do not intersect at a single point due to the errors in the TOA measurements. Therefore, those circles form a small region, which can be regarded as the size of the localization error, as illustrated in Picture 1.

Picture 1. Illustration of the geometrical model based on the noisy TOA measurements

3. LEAST SQUARES METHODS

This section considers two approaches for determining the location of an emitting source in WSNs based on least squares, namely the LLS and the NLS estimation methods.

The NLS estimation method minimizes the objective function $J_{NLS}(\mathbf{x})$ in order to find the estimated location of the emitting source, where the objective function is defined as the sum of the squared residuals and can be expressed as

$$\min_{\mathbf{x}\in\mathbb{R}^2} J_{NLS}(\mathbf{x}) = \min_{\mathbf{x}\in\mathbb{R}^2} \sum_{l=1}^{N} e_l^2(\mathbf{x}), \qquad (3)$$

where $e_l$ is the residual, defined as the difference between the measured and the estimated distance, which can be expressed as

$$e_l(\mathbf{x}) = r_l - \|\mathbf{x}_l - \mathbf{x}\|_2 = r_l - \sqrt{(x - x_l)^2 + (y - y_l)^2}, \qquad (4)$$


where $r_l$ is the measured distance at the $l$th sensor and the estimated range $\|\mathbf{x}_l - \mathbf{x}\|$ is obtained as the Euclidean distance between the position of the $l$th sensor $\mathbf{x}_l$ and the estimated position $\mathbf{x}$.

Therefore, the problem defined in Eq. (3) is an unconstrained nonlinear optimization problem, where the optimal emitting source location is obtained as the minimum of the objective function $J_{NLS}$, that is

$$\mathbf{x}^{*} = \arg\min_{\mathbf{x}\in\mathbb{R}^{2}} J_{NLS}(\mathbf{x}). \qquad (5)$$

This problem can be solved by numerical iterative gradient search algorithms such as the Steepest Descent, the Newton-Raphson and the Gauss-Newton methods. Picture 2 shows the objective function $J_{NLS}$ of the NLS optimization method, given by Eq. (3), where it is observed that $J_{NLS}$ is a convex function with a single unique minimum.

Picture 2. Graph of the objective function $J_{NLS}(\mathbf{x})$

3.1. Formulation of the LLS method

The LLS is a simple and commonly used method because of its easy implementation and high computational efficiency; it provides a closed-form solution, however with lower accuracy. The concept of the LLS method is to transform the system of nonlinear equations into a system of linear equations.

Squaring both sides of Eq. (2) and neglecting the measurement errors $\{n_l\}$ yields the following overdetermined, nonlinear system of equations

$$r_l^2 = (x - x_l)^2 + (y - y_l)^2 = x^2 - 2x_l x + x_l^2 + y^2 - 2y_l y + y_l^2,$$

that is,

$$-2x_l x - 2y_l y = r_l^2 - x^2 - y^2 - x_l^2 - y_l^2, \quad l \in \{1,\dots,N\}. \qquad (6)$$

Subtracting the first equation from the remaining $N-1$ equations, the system of equations (6) can be transformed into the linear matrix form

$$\mathbf{A}\mathbf{x} = \mathbf{b}, \qquad (7)$$

where

$$\mathbf{A} = \begin{bmatrix} -2(x_2 - x_1) & -2(y_2 - y_1) \\ -2(x_3 - x_1) & -2(y_3 - y_1) \\ \vdots & \vdots \\ -2(x_N - x_1) & -2(y_N - y_1) \end{bmatrix}, \qquad (8)$$

$$\mathbf{b} = \begin{bmatrix} r_2^2 - x_2^2 - y_2^2 - r_1^2 + x_1^2 + y_1^2 \\ r_3^2 - x_3^2 - y_3^2 - r_1^2 + x_1^2 + y_1^2 \\ \vdots \\ r_N^2 - x_N^2 - y_N^2 - r_1^2 + x_1^2 + y_1^2 \end{bmatrix}. \qquad (9)$$

Then, the overdetermined system of linear equations (7) can be solved directly for $\mathbf{x}$ by minimizing the following optimization problem

$$\min_{\mathbf{x}\in\mathbb{R}^2} \|\mathbf{A}\mathbf{x} - \mathbf{b}\|^2. \qquad (10)$$

From Eq. (10), the closed-form solution of the LLS estimation problem is obtained as

$$\hat{\mathbf{x}}_{LLS} = \left(\mathbf{A}^T \mathbf{A}\right)^{-1} \mathbf{A}^T \mathbf{b}. \qquad (11)$$

Therefore, the solution $\hat{\mathbf{x}}_{LLS}$ can be utilized as the initial value for the NLS estimation problem.
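The closed-form solution (11) is straightforward to evaluate numerically; the following minimal Python sketch uses the sensor layout later adopted in Section 5, with illustrative noise values (function names are not from the paper):

import numpy as np

def lls_estimate(sensors, r):
    """Closed-form LLS location estimate, Eq. (11): x = (A^T A)^{-1} A^T b."""
    x1, y1 = sensors[0]
    r1 = r[0]
    A = -2.0 * (sensors[1:] - sensors[0])                 # rows: -2(x_l-x_1), -2(y_l-y_1), Eq. (8)
    b = (r[1:]**2 - np.sum(sensors[1:]**2, axis=1)
         - r1**2 + x1**2 + y1**2)                         # Eq. (9)
    return np.linalg.solve(A.T @ A, A.T @ b)              # normal equations

sensors = np.array([[0.0, 0.0], [2000.0, 0.0], [0.0, 2000.0], [2000.0, 2000.0]])
true_pos = np.array([400.0, 600.0])
r = np.linalg.norm(sensors - true_pos, axis=1) + np.random.normal(0, 5, 4)  # noisy ranges
print(lls_estimate(sensors, r))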

3.2. Steepest Descent method

The Steepest Descent is a simple gradient-based optimization method, characterized by a first-order convergence rate and easy implementation, which finds a local minimum of a nonlinear objective function. This method starts with an initial guess of the solution and minimizes along the direction opposite to the gradient of the objective function. The algorithm converges when it reaches a point with zero gradient.

The iterative update rule of the Steepest Descent method in the $(k+1)$th iteration is given as

$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \alpha_k \nabla J_{NLS}\big(\mathbf{x}^{(k)}\big), \qquad (12)$$

where $\alpha_k$ is a nonnegative scalar that minimizes the objective function in the direction of the gradient, and $\nabla J_{NLS}(\mathbf{x})$ is the gradient of the objective function, which can be expressed as

$$\nabla J_{NLS}(\mathbf{x}) = \left[\frac{\partial J_{NLS}(\mathbf{x})}{\partial x},\; \frac{\partial J_{NLS}(\mathbf{x})}{\partial y}\right]^T \in \mathbb{R}^{2}. \qquad (13)$$

A practical choice of the stopping criterion for the iterative procedure is to check whether the norm of the gradient of the objective function $J_{NLS}(\mathbf{x})$ is less than or equal to a sufficiently small positive constant $\varepsilon$,

$$\left\|\nabla J_{NLS}\big(\mathbf{x}^{(k+1)}\big)\right\| \leq \varepsilon. \qquad (14)$$
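A minimal Python sketch of the update (12) with the stopping rule (14); the exact line search for the step size is replaced here by a simple backtracking rule, which is an implementation choice of this sketch, not the paper's:

import numpy as np

def grad_J(x, sensors, r):
    """Gradient of J_NLS, Eq. (13), for residuals e_l = r_l - ||x_l - x||."""
    d = np.linalg.norm(sensors - x, axis=1)          # distances d_l
    e = r - d                                        # residuals, Eq. (4)
    return (2.0 * e / d) @ (sensors - x)             # sum_l 2 e_l (x_l - x)/d_l

def steepest_descent(x0, sensors, r, eps=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    J = lambda p: np.sum((r - np.linalg.norm(sensors - p, axis=1))**2)
    for _ in range(max_iter):
        g = grad_J(x, sensors, r)
        if np.linalg.norm(g) <= eps:                 # stopping rule, Eq. (14)
            break
        alpha, J0 = 1.0, J(x)
        while J(x - alpha * g) > J0 and alpha > 1e-12:
            alpha *= 0.5                             # backtracking step-size choice
        x = x - alpha * g                            # update rule, Eq. (12)
    return x

# usage with the LLS start, e.g.:
# x_hat = steepest_descent(lls_estimate(sensors, r), sensors, r)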

3.3. Newton-Raphson method

The Newton-Raphson method is a widely used technique for solving the NLS problem, because it converges quickly towards the optimal solution with a quadratic rate of convergence. This method requires the computation of the Hessian matrix $\nabla^2 J_{NLS}$ in each iteration, which is expressed as follows

$$\nabla^2 J_{NLS}(\mathbf{x}) = \begin{bmatrix} \dfrac{\partial^2 J_{NLS}(\mathbf{x})}{\partial x^2} & \dfrac{\partial^2 J_{NLS}(\mathbf{x})}{\partial x \partial y} \\[2mm] \dfrac{\partial^2 J_{NLS}(\mathbf{x})}{\partial y \partial x} & \dfrac{\partial^2 J_{NLS}(\mathbf{x})}{\partial y^2} \end{bmatrix} \in \mathbb{R}^{2\times 2}, \qquad (15)$$

where the Hessian matrix $\nabla^2 J_{NLS}$ is positive definite or positive semi-definite.

Using the Taylor expansion, the second-order approximation of the objective function $J_{NLS}(\mathbf{x} + \Delta\mathbf{x})$ at the current iteration is defined as follows

$$J_{NLS}(\mathbf{x} + \Delta\mathbf{x}) \approx \Phi(\Delta\mathbf{x}) = J_{NLS}(\mathbf{x}) + \Delta\mathbf{x}^T \nabla J_{NLS}(\mathbf{x}) + \frac{1}{2}\Delta\mathbf{x}^T \nabla^2 J_{NLS}(\mathbf{x})\,\Delta\mathbf{x}, \qquad (16)$$

where $\Delta\mathbf{x}$ is a small change at the given point $\mathbf{x} \in \mathbb{R}^2$. The search direction vector of the Newton-Raphson method is obtained from the optimality condition of Eq. (16), $\nabla\Phi(\Delta\mathbf{x}) = 0$, as follows

$$\Delta\mathbf{x} = -\left[\nabla^2 J_{NLS}(\mathbf{x})\right]^{-1} \nabla J_{NLS}(\mathbf{x}). \qquad (17)$$

As a result, the iterative procedure of the Newton-Raphson algorithm is expressed as follows

$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} + \Delta\mathbf{x}^{(k)}, \quad k = 0,1,\dots,m, \qquad (18)$$

where $\Delta\mathbf{x}^{(k)}$ is the search direction in the $k$th iteration and $m$ is the maximum number of iterations.

The Newton-Raphson method terminates the iteration if the change of the objective function $J_{NLS}(\mathbf{x})$ reaches a predefined threshold,

$$\left|J_{NLS}\big(\mathbf{x}^{(k+1)}\big) - J_{NLS}\big(\mathbf{x}^{(k)}\big)\right| \leq \varepsilon\, J_{NLS}\big(\mathbf{x}^{(k)}\big), \qquad (19)$$

where $\varepsilon$ is a small nonnegative real number.

However, the Newton-Raphson algorithm may fail to converge towards the optimal solution if the objective function $J_{NLS}(\mathbf{x})$ is not convex quadratic and the corresponding Hessian matrix is not positive definite. Moreover, the calculation of the Hessian matrix at each iteration can be computationally expensive. To overcome these problems, the Gauss-Newton method is applied, as presented in the next subsection.

3.4. Gauss-Newton method

The Gauss-Newton method is a modification of the Newton-Raphson method that requires only the first derivative of the objective function $J_{NLS}(\mathbf{x})$, making it computationally less expensive in each iteration. It has become a standard technique for the minimization of objective functions represented as a sum of squares of nonlinear functions [6].

The iterative update rule of the Gauss-Newton method is given as

$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \left[\mathbf{J}^T\big(\mathbf{x}^{(k)}\big)\,\mathbf{J}\big(\mathbf{x}^{(k)}\big)\right]^{-1} \mathbf{J}^T\big(\mathbf{x}^{(k)}\big)\,\mathbf{e}\big(\mathbf{x}^{(k)}\big), \qquad (20)$$

where $\mathbf{e}\big(\mathbf{x}^{(k)}\big) = \left[e_1\big(\mathbf{x}^{(k)}\big),\dots,e_N\big(\mathbf{x}^{(k)}\big)\right]^T$ is the residual vector and $\mathbf{J}\big(\mathbf{x}^{(k)}\big)$ is the Jacobian matrix of the residuals evaluated at $\mathbf{x}^{(k)}$, which can be expressed as

$$\mathbf{J}(\mathbf{x}) = \begin{bmatrix} \dfrac{\partial e_1(\mathbf{x})}{\partial x} & \dfrac{\partial e_1(\mathbf{x})}{\partial y} \\ \vdots & \vdots \\ \dfrac{\partial e_N(\mathbf{x})}{\partial x} & \dfrac{\partial e_N(\mathbf{x})}{\partial y} \end{bmatrix} = \begin{bmatrix} \dfrac{x_1 - x}{\|\mathbf{x}_1 - \mathbf{x}\|_2} & \dfrac{y_1 - y}{\|\mathbf{x}_1 - \mathbf{x}\|_2} \\ \vdots & \vdots \\ \dfrac{x_N - x}{\|\mathbf{x}_N - \mathbf{x}\|_2} & \dfrac{y_N - y}{\|\mathbf{x}_N - \mathbf{x}\|_2} \end{bmatrix}. \qquad (21)$$

The iteration procedure is terminated as soon as the gradient of the objective function $J_{NLS}(\mathbf{x})$ becomes sufficiently close to zero, so that

$$\left\|\nabla J_{NLS}\big(\mathbf{x}^{(k+1)}\big)\right\| \leq \varepsilon, \qquad (22)$$

where the gradient is measured in a suitable norm and $\varepsilon$ is a given threshold.
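A minimal Python sketch of the Gauss-Newton iteration (20)-(22) built from the residual Jacobian (21); the Newton-Raphson variant would differ only in using the full Hessian (15) in place of J^T J (names and defaults are illustrative):

import numpy as np

def gauss_newton(x0, sensors, r, eps=1e-8, max_iter=50):
    """Gauss-Newton iteration for the NLS TOA problem, Eqs. (20)-(22)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.norm(sensors - x, axis=1)    # distances ||x_l - x||
        e = r - d                                  # residuals, Eq. (4)
        J = (sensors - x) / d[:, None]             # Jacobian rows of Eq. (21)
        step = np.linalg.solve(J.T @ J, J.T @ e)   # (J^T J)^{-1} J^T e
        x = x - step                               # update, Eq. (20)
        grad = 2.0 * J.T @ e                       # gradient of J_NLS
        if np.linalg.norm(grad) <= eps:            # stopping rule, Eq. (22)
            break
    return x

# usage, e.g. with the sensor layout of Section 5 and the LLS start:
# x_hat = gauss_newton(lls_estimate(sensors, r), sensors, r)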

4. CRAMER-RAO LOWER BOUNDS

In general, the Cramer-Rao lower bound sets the theoretical lower bound on the variance (or covariance matrix) of any unbiased estimator [7].

The CRLB is calculated as the inverse of the Fisher information matrix (FIM) $\mathbf{I}(\mathbf{x})$, which can be defined as follows

$$\mathbf{I}(\mathbf{x}) = E\left[\frac{\partial \ln f(\mathbf{r}|\mathbf{x})}{\partial \mathbf{x}}\,\frac{\partial \ln f(\mathbf{r}|\mathbf{x})}{\partial \mathbf{x}^T}\right] = -E\left[\frac{\partial^2 \ln f(\mathbf{r}|\mathbf{x})}{\partial \mathbf{x}\,\partial \mathbf{x}^T}\right], \qquad (23)$$

where $E(\cdot)$ is the expectation operator and $f(\mathbf{r}|\mathbf{x})$ is the joint density function, which has the following form

$$f(\mathbf{r}|\mathbf{x}) = \frac{1}{(2\pi)^{N/2}\,|\mathbf{C}|^{1/2}} \exp\left(-\frac{1}{2}\big(\mathbf{r}-\mathbf{d}(\mathbf{x})\big)^T \mathbf{C}^{-1} \big(\mathbf{r}-\mathbf{d}(\mathbf{x})\big)\right), \qquad (24)$$

where $\mathbf{C} = \mathrm{diag}\left[\sigma_1^2 \;\dots\; \sigma_N^2\right]$ is the $N$-dimensional covariance matrix.

Finally, the covariance matrix of any unbiased estimator $\hat{\mathbf{x}}$ of $\mathbf{x}$ is bounded by the CRLB, obtained as the inverse of the FIM, as follows

$$\mathrm{cov}(\hat{x}, \hat{y}) \geq \big(\mathbf{I}(x, y)\big)^{-1}, \qquad (25)$$

where

$$\mathbf{I} = \begin{bmatrix} \sum_{l=1}^{N} \dfrac{(x - x_l)^2}{d_l^2 \sigma_l^2} & \sum_{l=1}^{N} \dfrac{(x - x_l)(y - y_l)}{d_l^2 \sigma_l^2} \\[2mm] \sum_{l=1}^{N} \dfrac{(y - y_l)(x - x_l)}{d_l^2 \sigma_l^2} & \sum_{l=1}^{N} \dfrac{(y - y_l)^2}{d_l^2 \sigma_l^2} \end{bmatrix}. \qquad (26)$$

Based on the TOA measurements with independent zero-mean Gaussian noise components, the relation between the variance and the CRLB can be expressed as

$$E\left[(\hat{x} - x)^2 + (\hat{y} - y)^2\right] \geq \mathrm{trace}\big(\mathbf{I}^{-1}\big) = \frac{I_{11} + I_{22}}{I_{11} I_{22} - I_{12}^2}, \qquad (27)$$

where $\hat{x}, \hat{y}$ are the estimated values of $x, y$, respectively, and $I_{ij}$ represents the element of the FIM $\mathbf{I}(x,y)$ in the $i$th row and $j$th column.
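The FIM (26) and the bound (27) can be evaluated directly for a given geometry; the minimal Python sketch below uses the sensor layout and source position from the simulation setup in Section 5 and assumes a common range standard deviation sigma for all sensors:

import numpy as np

def crlb_rmse(sensors, source, sigma):
    """Evaluate the FIM of Eq. (26) and the RMSE bound sqrt(trace(I^-1)), Eq. (27)."""
    diff = source - sensors                  # (x - x_l, y - y_l) per sensor
    d2 = np.sum(diff**2, axis=1)             # d_l^2
    w = 1.0 / (d2 * sigma**2)                # 1 / (d_l^2 sigma_l^2)
    I = (diff * w[:, None]).T @ diff         # 2x2 Fisher information matrix
    return np.sqrt(np.trace(np.linalg.inv(I)))

sensors = np.array([[0.0, 0.0], [2000.0, 0.0], [0.0, 2000.0], [2000.0, 2000.0]])
source = np.array([400.0, 600.0])
print(crlb_rmse(sensors, source, sigma=10.0))   # lower bound on RMSE in metres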

5. SIMULATION RESULTS

In this section, the localization performance of the Steepest Descent, the Newton-Raphson and the Gauss-Newton methods is evaluated and compared with the CRLB and the closed-form LLS method. In the simulation environment, four sensors with known coordinates $[0,0]^T$ m, $[2000,0]^T$ m, $[0,2000]^T$ m and $[2000,2000]^T$ m are involved in the WSN. The emitting source is assumed to be located at $[400,600]^T$ m. The TOA measurements are generated by adding zero-mean white Gaussian noise to the true position of the emitting source. In general, the gradient methods require an initial point, and therefore the closed-form solution $\hat{\mathbf{x}}_{LLS}$ of the LLS estimation problem can be employed as the initial solution, which is important for the accuracy and efficiency of the proposed NLS methods.

The root mean square error has been used as a standard statistical metric to measure the localization performance of the proposed algorithms, and is calculated as

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{n=1}^{N} \left\|\hat{\mathbf{x}}(n) - \mathbf{x}\right\|^2}, \qquad (28)$$

where $\mathbf{x}$ and $\hat{\mathbf{x}}(n)$ are the true and estimated positions of the emitting source, respectively, and $N = 100$ is the number of Monte Carlo simulation runs.

The cumulative distribution functions (CDFs) of the LLS, the Steepest Descent, the Newton-Raphson and the Gauss-Newton localization errors are compared with the CRLB to evaluate the performance of the proposed localization methods. Simulations are done for different levels of SNR, SNR = 15 dB and SNR = 40 dB, respectively.

In Picture 3, the CDFs of the LLS and the proposed nonlinear optimization methods are plotted and compared to the CRLB, with the SNR level set to 15 dB.

Picture 3. CDFs of the emitting source localization error when SNR = 15 dB
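A compact Python sketch of the Monte Carlo procedure behind these curves, implementing the RMSE of Eq. (28) for any of the estimators sketched above (the estimator argument is a placeholder; noise level and run count are illustrative):

import numpy as np

def rmse_monte_carlo(estimator, sensors, source, sigma, runs=100):
    """RMSE over Monte Carlo runs, Eq. (28); 'estimator' maps noisy ranges to a position."""
    d = np.linalg.norm(sensors - source, axis=1)
    errors = []
    for _ in range(runs):
        r = d + np.random.normal(0.0, sigma, size=d.shape)   # noisy ranges, Eq. (2)
        x_hat = estimator(sensors, r)
        errors.append(np.sum((x_hat - source)**2))
    return np.sqrt(np.mean(errors))

# usage with any estimator sketched above, e.g. the LLS closed form:
# print(rmse_monte_carlo(lls_estimate, sensors, source, sigma=10.0))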

From Picture 3, it is observed that all proposed NLS estimators have similar localization accuracy, with approximately 10% more simulated runs closer to the CRLB compared to the LLS estimator.

Picture 4 illustrates the CDFs of the LLS and the NLS methods compared to the CRLB, where the SNR level is set to 40 dB.

Picture 4. CDFs of the emitting source localization error when SNR = 40 dB

It can be seen from Picture 4 that the localization error CDF of the LLS method is close to the CRLB in only 43% of the simulated runs. However, the CDFs of the proposed nonlinear optimization estimators coincide, and they show better localization performance compared to the LLS estimator, with the estimation errors in over 60% of the simulated runs close to the CRLB. Comparing the results presented in Picture 3 and Picture 4, it can be observed that a higher level of SNR leads to significantly improved localization performance.

Finally, Picture 5 shows the comparison of the localization accuracy of the LLS and the NLS methods with the CRLB for various SNR levels.

Picture 5. Comparison of RMSE versus SNR levels

From the results in Picture 5, it is evident that the proposed nonlinear optimization methods reach the CRLB accuracy and are almost constant over a wide range of SNR values. Moreover, it is observed that the LLS estimator is not very sensitive to SNR and exhibits a small threshold effect under high measurement error levels.

6. CONCLUSION

In this paper, the LLS and NLS approaches to the source localization problem in the presence of noisy TOA measurements have been considered. The NLS estimation problem is solved using gradient-based optimization methods, namely the Steepest Descent, the Newton-Raphson and the Gauss-Newton methods, to estimate the unknown location of an emitting source in WSNs. Simulation results show that the proposed nonlinear optimization methods outperform the LLS method and can achieve higher localization accuracy over a wide range of SNR values.

As future work, the developed optimization models can be further extended with an additional energy efficiency criterion, which can be solved by multi-objective optimization.

References
[1] Yick,J., Mukherjee,B., Ghosal,D.: Analysis of a Prediction-based Mobility Adaptive Tracking Algorithm, Proceedings of the IEEE Second International Conference on Broadband Networks (BROADNETS), Boston, 1 (2005) 753-760.
[2] Simić,M., Pejović,P.: A comparison of three methods to determine mobile station location in cellular communication systems, Transactions on Emerging Telecommunications Technologies, 20 (2008) 711-721.
[3] Figueiras,J., Frattasi,S.: Mobile Positioning and Tracking, John Wiley & Sons, United Kingdom, 2010.
[4] Bisio,I., Cerruti,M., Lavagetto,F., Marchese,M., Pastorino,M., et al.: A trainingless WiFi fingerprint positioning approach over mobile devices, IEEE Antennas and Wireless Propagation Letters, 13 (2014) 832-835.
[5] Chalise,B., Zhang,Y., Amin,M., Himed,B.: Target localization in a multi-static passive radar system through convex optimization, Signal Processing, 102 (2014) 207-215.
[6] Ioannis,K., Magren,A.: Local convergence analysis of proximal Gauss-Newton method for penalized nonlinear least squares problems, Applied Mathematics and Computation, 241 (2014) 401-408.
[7] Laaraiedh,M., Avrillon,S., Uguen,B.: Cramer-Rao lower bounds for nonhybrid and hybrid localisation techniques in wireless networks, European Transactions on Telecommunications, 23 (2012) 268-280.

GPU-BASED PREPROCESSING FOR SPECTRUM SEGMENTATION IN DIRECTION FINDING

MARKO MIŠIĆ
University of Belgrade, Belgrade, marko.misic@etf.bg.ac.rs
IVAN POKRAJAC
Military Technical Institute, Belgrade, ivan.pokrajac@vs.rs
NADICA KOZIĆ
Military Technical Institute, Belgrade, nadica.kozic@gmail.com
PREDRAG OKILJEVIĆ
Military Technical Institute, Belgrade, predrag.okiljevic@mod.gov.rs

Abstract: The architecture of a modern direction finder is based on the interception of radio signals in an instantaneous
bandwidth that is considerably wider than the bandwidth of the signals. In the wider instantaneous bandwidth, more
signals are active simultaneously. Additional processing is necessary in order to estimate the number of active emitters
and the parameters of each emission. This additional processing is known as spectrum segmentation or signal pre-classification. In order to improve the quality of signal pre-classification, preprocessing is needed. Preprocessing can be
based on morphological operations. Nowadays, graphics processing units (GPUs) can be used for signal processing in
direction finding, since they offer vast computing power and bandwidth compared to central processing units (CPUs). GPUs
are especially suitable for problems with high regularity. In this paper, preprocessing based on morphological operations
has been implemented on the GPU using NVIDIA CUDA. Those operations include spatial filtering, and erosion and
dilation with different structuring elements. The GPU and sequential implementations have been tested with different test
cases and significant speedups over the sequential implementation have been observed. Finally, the results of the analysis
are briefly discussed.
Keywords: CUDA, direction finding, GPU, morphological operations, spectrum segmentation.

1. INTRODUCTION
The determination of emitter positions (emitter geo-locations) has various applications in both civil and defense oriented fields. In defense applications, the determination of emitter positions is very important in EW (Electronic Warfare) systems and systems for gathering intelligence data, such as COMINT (Communication Intelligence) systems. Electronic support (ES), as a part of EW, provides near-real-time information which can be developed into the Electronic Order of Battle (EOB) used for situational awareness [1]. In order to determine emitter position, two-step techniques can be implemented in modern direction finders (DFs). Two-step positioning techniques, or indirect methods, are based on the estimation of a specified parameter, such as the direction of arrival (DOA) or the time of arrival (TOA), at each sensor. The estimated parameters are sent to the central sensor in order to determine the emitter location. Much of the work in this field, especially in earlier days, focused on radio direction finding. Direction finding is the process of estimating the direction of electromagnetic waves impinging on one or more antennas.

Modern wideband direction finders (WDFs) process signals in an instantaneous bandwidth that is considerably wider than the bandwidth of the signals. In such a case, more signals are active simultaneously, with different parameters such as frequency, bandwidth, and type of emission. In order to estimate the number of active transmitters in the instantaneous bandwidth and the parameters of each detected emission, a process of detection and pre-classification has to be implemented in the WDF. Data from the WDF processor for DOA estimation are usually inputs to the module for signal pre-classification. This additional processing is also known as spectrum segmentation. Spectrum segmentation is based on the analysis of data presented in the time-frequency-azimuth spectrum and the time-frequency-power spectrum. These spectra consist of time, frequency, azimuth, optionally elevation, and power or signal level parameters. In the open literature, little research has been presented on signals based on wideband DF data [2], [3].

In order to achieve better results in the process of spectrum segmentation, some kind of data preprocessing has to be introduced into the modern WDF. An approach for detection and segmentation based on image processing has been presented in [4]. This approach is based on the interpolation of the azimuth spectrum as a set of binary images, and the process of segmentation is implemented with image processing algorithms applying classical erosion and dilation operations with different structuring elements. In this paper, we use this approach to implement and evaluate the data preprocessing step on graphics processing units (GPUs).

Graphics processing units have been intensively used in general-purpose computations for several years. In the last decade, GPU architecture and organization have changed dramatically to support the ever-increasing demand for computing power [5]. Contemporary GPUs are manycore processors, offering abundant data-level parallelism with hundreds or thousands of cores available for execution. Although they can achieve a very high number of FLOPS, they are suitable only for a certain set of problems that show high regularity. Therefore, GPUs are used as accelerators to central processing units (CPUs). In heterogeneous systems, the CPU is in charge of I/O, management tasks, or smaller portions of work, while compute-intensive parts are offloaded to the GPU.

Numerous papers report high speedups in various domains [6], both in scientific and commercial applications. Successful implementations have been found in the domain of signal processing, for audio, image, and video processing purposes. Because of our previous experience [5], [7], [8], we decided to accelerate the preprocessing step of spectrum segmentation using GPUs. We used the NVIDIA CUDA programming model, which is the most mature and widely used programming environment for GPU computing [9].

The paper consists of six sections. An introduction is given in Section 1. The concept of a modern direction finder is given in Section 2. More details on preprocessing for spectrum segmentation are given in Section 3. A short overview of the GPU architecture and the CUDA programming model, and the implementation details of preprocessing for spectrum segmentation on the GPU, are presented in Section 4. Experimental results are discussed in the fifth section. Conclusions and directions for future work are given in the final section.

2. CONCEPT OF MODERN DIRECTION FINDER

The architecture of a modern direction finder is based on the interception of radio signals in an instantaneous bandwidth that is considerably wider than the bandwidth of the signals. A modern wideband DF (WDF) consists of a DF antenna array, acquisition channels and a digital signal processing block. The basic architecture of a modern WDF is presented in Picture 1 [10]. An acquisition channel includes an RF front end with tuners that convert the RF signal to an appropriate intermediate frequency (IF), A/D converters, and digital down converters (DDC) that convert the signal to baseband. The signal acquisition in a DF may be performed using parallel or sequential techniques. In parallel signal acquisition, the measurement of DOA is nearly instantaneous and implements as many signal acquisition channels as there are signals generated from the antennas. In the case of sequential signal acquisition, the number of RF channels is lower than the number of antenna elements in the DF antenna array. The measurement of DOA is available only after the end of a sequence involving RF switching, either on the antennas or after phase and/or amplitude weighting of the signals from the antennas. The basic advantage of sequential signal acquisition is reduced receiver hardware, which implies reduced cost, volume and weight. However, there is a time-accuracy tradeoff due to less data being gathered compared with parallel signal acquisition. In sequential signal acquisition, the total time for signal acquisition depends on the number of antenna subsets. In the part of the simulator for generating a signal on the DF antenna array, it is possible to generate the signal at the antenna array using both techniques.

Picture 1. Basic architecture of a wideband direction finder

A modern WDF has to provide a very high probability of interception of signals. To fulfill this requirement, it is necessary to increase the instantaneous bandwidth or to decrease the time for data processing by increasing processing speed and processing power. In order to provide a wider instantaneous bandwidth, it is possible to form more than one digital down converter (DDC) at each acquisition channel. Using this approach, it is possible to provide an instantaneous bandwidth of the wideband DF corresponding to the number of DDCs and the output sample rate of the DDCs.

3. PREPROCESSING FOR SPECTRUM SEGMENTATION

In a modern WDF, it is a challenge to implement an efficient algorithm for emission classification. One of the most important challenges for a WDF is wideband signal detection without dependence on the signal level. To perform classification of emissions, such as fixed frequency, low probability of interception (frequency hopping and direct spread spectrum), chirp, or burst, an algorithm for pre-classification or spectrum segmentation has to be implemented in the WDF.

The main goals of the pre-classification module are the classification of types of transmission, the estimation of the duration of emissions, the averaging of DOA and the determination of the quality of measurement. Signal pre-classification has to be performed at every WDF. After pre-classification, the next step is to determine the position of the transmitter based on the estimated DOAs from all WDFs, to separate and group different emissions into emitters, and to form radio networks from the emitters. This process is complicated for several reasons: the emissions are sporadic, consisting of short bursts of intercepted emissions with long quiescent intervals. In addition, emitters can be used in different radio networks from the same location. Separating and grouping different emissions into emitters and radio networks is performed in the module for forming the Communication Order of Battle (COB).

An algorithm based on emitter detection in the time-frequency-azimuth spectrum is presented in [4]. Emitter detection is regarded as the same task as object identification and segmentation in images. The transformation of a time-frequency-azimuth spectrum into a set of images is achieved by dividing the time-frequency-azimuth spectrum into equal intervals with some overlap. Picture 2 and Picture 3 show the time-frequency-power spectrum and the time-frequency-azimuth spectrum obtained at an HF WDF.

Picture 2. Example of time-frequency-power spectrum from HF WDF

In the next step, the time-frequency-azimuth spectrum is spatially filtered in the band 220-240 without an energy detector. Picture 4 presents the time-frequency-azimuth spectrum after spatial filtering. Based on the results, it can be concluded that emissions with estimated DOAs outside the band of interest are discarded. However, there are a lot of frequency bins inside this spatial band that can cause false detections during signal pre-classification. At these frequency bins there are only noise components, and because of that the estimated DOAs have a Gaussian distribution. In order to eliminate these false detections, image processing algorithms that apply classical erosion and dilation procedures with different structuring elements have been proposed in [4]. Picture 5 presents the time-frequency-azimuth spectrum after image processing with a line structuring element of dimension M[1,7].

Image processing is performed through erosion and dilation, which are known in the literature as morphological operations. In our research, we are interested in binary erosion and dilation. Binary erosion and dilation are used for image processing with flat, 1D or 2D structuring elements. We use the definitions given in [11]. Binary erosion can be defined as a set of pixel locations z in image A where the structuring element B translated to location z overlaps only with foreground pixels in A:

$$(A \ominus B)(x, y) = \min\{\, A(x + x', y + y') \mid (x', y') \in D_B \,\}. \qquad (1)$$

Binary dilation can be defined as a set of pixel locations z in image A where the reflected structuring element B translated to location z overlaps only with foreground pixels in A:

$$(A \oplus B)(x, y) = \max\{\, A(x - x', y - y') \mid (x', y') \in D_B \,\}. \qquad (2)$$

The structuring element is reflected through its center. In the case of a 1D structuring element, it is reversed. For a 2D structuring element, the reflection operation rotates the element 180 degrees around its center. In both cases, $D_B$ is the domain of the structuring element B, and elements A(x, y) outside of the domain are not considered.

After image processing, it is possible to perform an algorithm for time-frequency-azimuth spectrum analysis in order to perform spectrum segmentation. The spectrum segmentation can be performed at each spatial band, or a joint time-frequency-azimuth spectrum can be formed after filtering all bands of interest. Also, spectrum segmentation can be performed using different structuring elements for image processing, depending on the type of signal, e.g. fixed frequency or frequency hopping.
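As a plain reference implementation of the flat binary erosion and dilation of Eqs. (1) and (2) (a NumPy sketch for clarity, not the GPU code described in Section 4; pixels falling outside the image are simply skipped here, which is one possible border policy):

import numpy as np

def erode(A, B):
    """Flat binary erosion, Eq. (1): each output pixel is the minimum of A
    over the structuring-element domain D_B translated to that pixel."""
    rows, cols = A.shape
    br, bc = B.shape
    cy, cx = br // 2, bc // 2
    out = np.zeros_like(A)
    for y in range(rows):
        for x in range(cols):
            vals = [A[y + i - cy, x + j - cx]
                    for i in range(br) for j in range(bc)
                    if B[i, j]
                    and 0 <= y + i - cy < rows and 0 <= x + j - cx < cols]
            out[y, x] = min(vals) if vals else 0
    return out

def dilate(A, B):
    """Flat binary dilation, Eq. (2), via duality with erosion."""
    return 1 - erode(1 - A, B[::-1, ::-1])

A = (np.random.rand(8, 8) > 0.6).astype(np.uint8)   # toy binary image
B = np.ones((1, 3), dtype=np.uint8)                 # 1x3 line structuring element
print(erode(A, B))
print(dilate(A, B))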

Picture 3. Example of time-frequency-azimuth spectrum from HF WDF

Picture 4. Spatial filtering of the time-frequency-azimuth spectrum in the band 220-240

Picture 5. Result of image processing for the time-frequency-azimuth spectrum in the spatial band 220-240

4. GPU BASED PREPROCESSING FOR SPECTRUM SEGMENTATION

GPUs are mainly programmed through extensions of commodity programming languages such as C, C++, and Fortran. Low-level APIs, such as CUDA and OpenCL, offer significant control over program execution and performance optimization, but they pose a problem for non-experts in the field of computer science. On the other hand, high-level, directive-based APIs exist, such as OpenACC, in which the compiler is responsible for code generation. In both cases, some knowledge of the GPU architecture is desirable, since performance optimization is a challenging task on the GPU. Performance can vary greatly depending on the resource constraints of the particular device architecture, and it is largely up to the developer to exploit all the available parallelism [9].

A typical GPU consists of several streaming multiprocessors (SMs) that are able to execute a large number of lightweight threads. Each SM consists of a number of scalar processors. Typically, CUDA programs are executed in a co-processing fashion. Sequential parts are executed on the host CPU, while compute-intensive parts are offloaded to the GPU for parallel execution as special functions called kernels. Kernel execution is organized as a grid of thread blocks. Threads are executed in a SIMD fashion. Threads in a thread block can be synchronized using a barrier, but global synchronization is achieved only when kernel execution terminates. Threads from different thread blocks cannot cooperate, since they may or may not execute on the same SM. Available resources (registers, shared memory) are shared by all thread blocks executed on a particular SM.

The GPU memory architecture is designed to support high throughput and the execution of a large number of threads in parallel. It consists of a hierarchy of memories that differ in speed and capacity. Global DRAM memory accesses are slow, so threads can utilize other, smaller memories to speed up the execution, such as registers, shared memory, constant memory, etc. Each thread has exclusive access to its allocated registers and local memory. Threads in a block can share data through user-managed, per-block shared memory. Typically, the CPU and the GPU use different, physically separated memory spaces, so explicit transfers of data between the CPU and the GPU are needed.

The GPU-based preprocessing for spectrum segmentation was implemented as a dynamic link library (DLL) for the National Instruments LabVIEW software package. Preprocessing is performed through library calls that are provided with the necessary data: estimated azimuths, intervals of interest, signal levels, thresholds, structuring elements, etc. The estimated azimuths are stored in memory as a 2D, MxN matrix, where M is the number of acquired frames and N is the number of frequency bins in the instantaneous bandwidth.

Preprocessing is implemented in five steps, as shown in Picture 6. The initialization step is used to allocate CUDA objects, initialize the data structures used in other library calls, and transfer the data to the GPU. The finalization phase is used for cleanup purposes and for transferring the results to the host. In all other steps, several CUDA kernels have been implemented.

The frequency filtering step is used to keep only the azimuth estimations in the desired frequency range. Signal level filtering is optionally used to keep only those estimations above a given signal threshold. Both operations were implemented as kernels with a 2D grid organization in a similar fashion. Each thread accesses data in global memory and performs one of the aforementioned operations for one azimuth estimation.
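A NumPy stand-in for these two filtering steps is sketched below (array names, shapes and thresholds are illustrative assumptions; on the GPU, each element-wise decision corresponds to one thread):

import numpy as np

# Illustrative frequency- and level-filtering of the MxN azimuth matrix.
M, N = 200, 7168                       # frames x frequency bins (assumed sizes)
azimuth = np.random.uniform(0, 360, (M, N)).astype(np.float32)
level = np.random.uniform(-120, -40, (M, N)).astype(np.float32)   # signal levels

bin_lo, bin_hi = 1000, 2000            # frequency bins of interest
threshold = -90.0                      # signal-level threshold

freq_mask = np.zeros(N, dtype=bool)
freq_mask[bin_lo:bin_hi] = True        # keep only the desired frequency range
keep = freq_mask[None, :] & (level > threshold)

filtered = np.zeros_like(azimuth)
filtered[keep] = azimuth[keep]         # independent per-element decision,
                                       # i.e. one GPU thread per estimation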

The fourth phase implements spatial filtering and is much more complex than the others. As described in Section 3, the time-frequency-azimuth spectrum is equally divided into a set of images with some overlap. Each image is obtained by an azimuth range filtering kernel, implemented similarly to the kernels in steps 2 and 3. Each image is processed with the erosion and dilation morphological operations with the given structuring elements. The processed images are combined into the final result by a separate kernel, and a final kernel call is used to average the results. This final step is needed because of the overlapping segments in the final image.

Picture 6. Preprocessing steps in the implemented solution

The implemented morphological operations share a similar structure. We used a 2D kernel organization. Each thread is in charge of one pixel of the resulting image. It iterates through the given structuring element and the corresponding pixels of the image. Depending on the operation, the local minimum (for erosion) or the local maximum (for dilation) is calculated.
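As an illustration of this per-pixel kernel structure, the sketch below uses Numba's CUDA support in Python rather than the authors' CUDA C code, and deliberately omits the constant- and shared-memory optimizations discussed next:

import numpy as np
from numba import cuda

@cuda.jit
def erode_kernel(img, se, out):
    # One thread per output pixel: local minimum over the structuring element.
    x, y = cuda.grid(2)
    rows, cols = img.shape
    if x < rows and y < cols:
        se_r, se_c = se.shape
        m = img[x, y]
        for i in range(se_r):
            for j in range(se_c):
                if se[i, j] != 0:
                    xi = x + i - se_r // 2
                    yj = y + j - se_c // 2
                    if xi >= 0 and xi < rows and yj >= 0 and yj < cols:
                        v = img[xi, yj]
                        if v < m:
                            m = v
        out[x, y] = m

# host side (requires a CUDA-capable GPU; sizes are illustrative)
img = (np.random.rand(256, 1024) > 0.9).astype(np.float32)
se = np.ones((3, 3), dtype=np.uint8)
out = np.zeros_like(img)
threads = (16, 16)
blocks = ((img.shape[0] + threads[0] - 1) // threads[0],
          (img.shape[1] + threads[1] - 1) // threads[1])
erode_kernel[blocks, threads](img, se, out)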

Morphological operations are the most time-consuming part of the solution. For that reason, we used several optimizations in their implementation, mostly to avoid expensive global memory accesses. The same structuring element is accessed by each thread. Structuring elements are usually small, up to several hundred elements. Therefore, we stored the structuring element in constant memory, where accesses are cached and broadcast [9]. Constant memory is read-only CUDA memory, 64 KB in size. Since neighboring pixels of the image are used several times by different threads, it was feasible to store them in shared memory. At the beginning of the kernel, the threads in a thread block cooperatively read a subset of the image. The subset of the image is larger than the thread block, as border elements are also loaded. Those elements are sometimes called shadow or halo elements, and the approach is known from stencil computations [12]. The size of the border depends on the dimensions of the structuring element.

5. EXPERIMENTAL RESULTS

In this section, we present the performance analysis of our implementation. It was evaluated on an Intel Core 2 Duo E7600 at 3.06 GHz with 4 GB RAM, using an NVIDIA Quadro K2000 GPU with 2 GB RAM and 384 CUDA cores, on Windows 7. We used five different signal sizes and three different structuring elements. The test signals had 50, 100, 200, 500, and 1000 frames with 7168 frequency components, respectively. The structuring elements used were a 1x3 line element, a 3x3 square element, and a 5x5 diamond element. All signals were spatially filtered with azimuths in the range 100-200 degrees, with 20-degree slices and 5-degree overlap. The results are shown in the following pictures.

Picture 7 shows the execution times for all kernels of the preprocessing step of spectrum segmentation. It includes the execution times of the three central steps shown in Picture 6, excluding the allocation, initialization, initial CPU-GPU memory transfer, and finalization steps. The execution times show good scaling of the code, both with the increase in the number of frames and with the size of the structuring elements.

Picture 7. Execution times for all preprocessing kernels

During the evaluation, we noticed that the erosion and dilation kernels showed similar behavior and execution times. This is understandable, since the binary erosion and dilation used in our implementation do not differ considerably, for the reasons explained in the previous section. For that reason, we present only the results of the erosion operation in Picture 8 and Picture 9.

Profiling of the GPU code showed that the erosion and dilation kernels are the most time consuming. Therefore, we analyzed the scaling of those operations depending on the number of frames and the structuring element used. Picture 8 illustrates tendencies similar to those observed in Picture 7. The size of the structuring element affects the performance, and the execution time tends to increase more slowly for larger structuring elements, which is understandable, as kernel launch overheads are not negligible for smaller structuring elements.

Picture 8. Execution times for the erosion operation on the GPU

Finally, we analyzed the obtained speedup of GPU-based erosion and dilation over their CPU-based, sequential counterparts using the same test machine. We used those sequential implementations as a base for our GPU implementations. The results are presented in Picture 9. We observed significant speedups, from 20 to 50 times over the sequential implementations. Such high speedups come from the nature of the problem.

Picture 9. Observed speedup of the GPU-based erosion operation over the sequential implementation

Morphological operations are embarrassingly parallel, and therefore they benefit from execution on SIMD architectures such as the GPU. Similar speedups were observed in [13] for grayscale images and in [14] for binary images. Speedups tend to increase with the number of frames and the size of the structuring elements. As mentioned before, an increase in the number of frames decreases the significance of kernel launch overheads. The size of the structuring element plays a more important role, since GPU kernels yield better performance with increased arithmetic intensity. Each thread has more work to perform, but more work is performed per global memory access, since shared memory is used to cooperatively load elements for all threads in a block.

6. CONCLUSION

In this paper, we presented our experience with the implementation of spectrum segmentation preprocessing for a WDF on the GPU. The preprocessing step is important in a modern WDF, as it simplifies signal pre-classification. We observed excellent scaling of the GPU kernel execution times and significant speedup of the GPU-based erosion and dilation morphological operations over the sequential implementation.

There are several directions for future work. Auto-tuning frameworks can be used for the implementation of the morphological operations, since similar approaches have been used in stencil computations. Also, hardware support through NVIDIA Performance Primitives can be explored to further improve the performance of the erosion and dilation operations.

REFERENCES
[1] Pokrajac,I., Okiljevic,P., Vucic,D.: Fusion of multiple estimation of emitter position, Scientific Technical Review, Belgrade, Vol.62, No.3-4, 2012, pp.4-11.
[2] Quint,F., Reichert,J., Roos,H.: Emitter detection and tracking algorithm for a wide band multichannel direction-finding system in the HF-band, IEEE, 1999, pp.139-142.
[3] Pokrajac,I., Eric,M., Dukic,M.: An algorithm for parameter estimation of frequency hopping emitters and their separation and grouping in unique radio networks, Scientific Technical Review, Military Technical Institute, Belgrade, Vol.54, No.3-4, 2004, pp.15-23.
[4] Raps,F., Kollmann,K., Zeidler,H.C.: HF-band emitter detection and segmentation based on image processing, IEEE, 2001, pp.428-431.
[5] Misic,M., Djurdjevic,D., Tomasevic,M.: Evolution and trends in GPU computing, Proc. of the 35th International Convention MIPRO, Abbazia, Croatia, 2012, pp.289-294.
[6] Nickolls,J., Dally,W.J.: The GPU computing era, IEEE Micro, Vol.30, No.2, 2010, pp.56-69.
[7] Nikolov,D., Misic,M., Tomasevic,M.: GPU-based implementation of reverb effect, 23rd TELFOR, IEEE, 2015, pp.990-993.
[8] Misic,M., Dasic,D., Tomasevic,M.: An analysis of OpenACC programming model: Image processing algorithms as a case study, Telfor Journal, Vol.6, No.1, 2014, pp.53-58.
[9] Kirk,D.B., Hwu,W.M.: Programming Massively Parallel Processors: A Hands-on Approach, Morgan Kaufmann, 2010.
[10] Tuncer,T.E., Friedlander,B.: Classical and Modern Direction-of-Arrival Estimation, Academic Press, USA, 2009.
[11] Gonzalez,R.C., Woods,R.E., Eddins,S.L.: Digital Image Processing Using MATLAB, Gatesmark Publishing, 2009.
[12] Schäfer,A., Fey,D.: High performance stencil code algorithms for GPGPUs, Procedia Computer Science, Vol.4, 2011, pp.2027-2036.
[13] Rane,M.A.: Fast morphological image processing on GPU using CUDA, MSc thesis, College of Engineering, Pune, 2013.
[14] Koay,J.M., Chang,Y.C., Tahir,S.M., Sreeramula,S.: Parallel implementation of morphological operations on binary images using CUDA, Advances in Machine Learning and Signal Processing, Springer International Publishing, 2016, pp.163-173.

TECHNIQUES FOR INTELLIGENCE DATA GATHERING IN MOBILE COMMUNICATIONS

SAŠA STOJKOVIĆ
Arbor Education Partners, Belgrade, sljstojkovic@gmail.com
IVAN TOT
Ministry of Defence of Serbia, Belgrade, totivan@gmail.com
FEJSOV NIKOLA
RT - RK, Belgrade, fejsov@sbb.rs

Abstract: With new types of services that provide their users with multimedia data transfer, the mobile communication and mobile phone markets have expanded even more in just a couple of years. We are witnessing a continuous growth in the number of users, mobile companies and applications that use new types of protocols and data formats. This growth in mobile communication users can provide a fair field for intelligence data gathering. The aim of this paper is to explain the key factors and possibilities in the execution of data collection in open or encrypted communication. Also, this paper is more an explanation of the practical usage of intelligence resources than a theoretical explanation of mobile communications, protocols and security encryption measures.
Keywords: mobile, communications, intelligence, data gathering, data manipulation.

1. INTRODUCTION
The systematisation and classification of techniques for intelligence data collection is defined throughout this paper. The gathering process is analysed with respect to the existing levels of protection, together with the existing technologies for executing the data gathering techniques.

2. ANALYSIS
Based on all the challenges of this project, it was concluded that the best way to realise this paper was to use a test environment and open source software. The environment was established using:
- a local virtual Wi-Fi network,
- a laptop computer of average performance running the Linux BackTrack operating system,
- a mobile phone with the Android operating system,
- open source software that is free to download and use.

Data decryption is necessary when data has been gathered from encrypted emissions, so data decryption techniques are explained in a separate chapter. Based on the defined factors, an objective comparison of the existing data gathering techniques is also given.
Finally, a whole chapter explains two practical tests of data gathering. This paper also pays attention to the regulation of data collection in open communication protocols.
The aim of this paper is to explain the key factors and possibilities in the execution of data collection in open or encrypted communication. One of the aims is also to improve mobile communication security based on the results and explorations of this paper.

The goal of creating the test environment was to track every part of the communication process and to avoid breaking data protection laws. Every act of data interception which is not authorised by the official intelligence agencies is considered a direct breach of the personal data protection law.

Also, the most important thing that carries any data collection process executed by security agencies is the prevention of criminal activities.

Also, a laptop of average performance was used to prove that data gathering can be done by anyone, so the usage of known security measures is a necessity in today's mobile communications.

The challenges that occurred with this project were to:
- design a safe test environment,
- present the gathered data in a readable format,
- define the rules of the data gathering processes,
- find out which state laws define data gathering processes in mobile communications,
- combine data gathering with data manipulation techniques that could be used for mobile communications.

Because there are so many ways to gather and process data, one of the main goals was to specify the most efficient ways of data gathering. Every known technique that can be performed using some sort of open source software was tested and given an objective grade according to how much useful data it can bring in an objectively short period of time. Also, one of the main goals was to specify each of the most productive data gathering processes, so that the same approach is always used for the sake of training intelligence operators who may have less technical knowledge.
The final product of this paper is a data gathering and manipulation manual, intended to establish a doctrine and strategy that can be used in every situation that requires some kind of mobile communication intelligence.

3.1. Passive techniques for data gathering

The passive techniques for data gathering are all techniques that consist of data interception that does not break the chain of communication between the user and the devices that provide the communication service (router, LAN switch, base station of the mobile provider, etc.). These techniques are not so efficient, but they provide confidentiality of the data gathering process. By not interrupting the chain of communication, a station for data gathering is safe to use.
The most efficient passive data gathering techniques are:

3.1.1. Mobile network sniffing

As the leading communication medium, mobile networks must have fast algorithms for data security. That is one of the main vulnerabilities of mobile networks. The usage of fast algorithms is efficient but not the safest way of securing data, because the fastest algorithms are the easiest to penetrate.
Most of today's mobile network providers use the A5 algorithm for data security. Firstly, the A8 algorithm uses the secret key stored on the SIM card and a random 128-bit challenge to derive the session key that is used by the A5 algorithm. This might seem a complicated process, but A8 and A5 are symmetric algorithms, which means that they use the same key for encryption and decryption of the data transferred between the user's device and the base station. The logic of data encryption is shown in Picture 1.

Picture 1. Logic of Data Encryption

The only thing that the operator must do is to sniff when the user's mobile device establishes communication with the base station. At that moment, the base station sends the random challenge and receives the encrypted response, and the derived key is later used for all data encryption in that particular communication session. The logic of mobile network sniffing is presented in Picture 2.

Picture 2. Mobile network sniffing

3.1.2. Wi-Fi network sniffing

The second most common way of mobile communication is through wireless networks. Wireless networks are widely used not just by mobile phones, but also by a wide range of devices such as cameras, printers, computers, and even modern cars. Because of their transparency, Wi-Fi networks, like mobile networks, must provide fast and safe data transfer to their users. Today's Wi-Fi networks are among the relatively safest types of data transfer.
There are a lot of security protocols and encryption systems embedded into Wi-Fi network communication. Most of these protocols and encryption schemes, like those in mobile networks, use symmetric algorithms. The most common is PSK (Pre-Shared Keys). The logic of PSK is to have keys that are known to both sides, and by using them the devices encrypt all communication between them. The keys for PSK wireless networks are the network's name and the network's password, which can be from 8 to 63 characters long for newer WPA2-PSK.
The most efficient way of Wi-Fi sniffing is by using Wireshark. There are a lot of open source solutions for network sniffing, but Wireshark is the most used, has the most intuitive user interface, is easy to operate, and has a lot of documentation about its usage and an FAQ.
Wireshark is free to download and use. Wireshark does not have options for decrypting communications, but it has tools for detailed analysis of the transferred packets. The main screen of Wireshark is presented in Picture 3.

Picture 3. Main screen of Wireshark
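The PSK paragraph above can be made concrete with a short Python sketch of the standard WPA2-PSK key derivation (PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID); the SSID and passphrase below are placeholders and are not values from the tests described in this paper.

import hashlib

def wpa2_psk_pmk(ssid: str, passphrase: str) -> bytes:
    # WPA2-PSK derives a 256-bit pairwise master key from the passphrase
    # and the network name (SSID) using PBKDF2-HMAC-SHA1 with 4096 iterations.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

pmk = wpa2_psk_pmk("TestLabAP", "correct horse battery staple")   # placeholder values
print(pmk.hex())

This is why both the network name and the password act as "keys": anyone who knows them can reproduce the same master key and decrypt captured traffic offline.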


3.2. Active techniques for data gathering

Active techniques can be considered any data interception that involves direct interference with the communication between the targeted network devices. This kind of data gathering can result in direct exposure of the data gathering devices. In these techniques, which can be considered assaults on mobile and wireless networks, the operator must emit electromagnetic waves in order to retrieve intelligence data.
The most common, and the safest, way of active intelligence data gathering is through the usage of malware that can produce a leakage of confidential data from the targeted device. Malicious software can be deployed on the target device through the internet, cloned local wireless networks or by sending messages. One of the disadvantages of these approaches is that it is relatively hard to target just one device. When deploying malicious software the goal is to infect as many devices as possible. One of the possible options for the deployment of malicious software is through e-mail, which is displayed in Picture 4.

Picture 4. Deployment of malicious software through e-mail

When malicious software is deployed on the targeted device, it starts to leak intelligence data as soon as the user establishes a connection. Malicious software can be programmed to be triggered by a specific user action, so its wide usage is possible. The most common attack, the Man In The Middle attack, is displayed in Picture 5.

Picture 5. Man In The Middle Attack

4. CONCLUSION

The main goal of this paper was to form standard procedures for intelligence data gathering. By using these techniques, operators can provide intelligence data for later analysis conducted by intelligence organisations. Also, one of the side goals was to upgrade security systems that can deal with similar security breach attempts.
This paper is just a proposal of the most efficient and tested ways to gather intelligence data. It is not fully complete because it lacks data analysis and a comparison of the data gathering techniques. These additions would increase the quality of this paper, but they would also make it exceed its scope.

References
[1] Sauter,M.: From GSM to lte: an introduction to
mobile networks and mobile broadband, 2011 John
Wiley & Sons Ltd, The Atrium Southern Gate
Chichester United Kingdom.
[2] Rackley,S.: Wireless Network Technology, 30
Corporate Drive, Suite 400, Burlington MA United
Kingdom.
[3] Trabelsi,Z., Hayawi,K. Braiki,A.A., Sujith,S,M,:
Network attacks and defenses, Taylor & Francis
Group 2013, Broken Sound Parkway NW USA.
[4] Pleskonji,D., Maek,N., orevi,B., Cari,M.:
Sigurnost raunarskih sistema i mrea, Mikro knjiga
11030 Beograd, R. Srbija 2007.
[5] Davies,J.: Implementing SSL/TLS using cryptography
and PKI, Wiley Publishing Inc. 2011, Indiana USA.
[6] Chappell,L., Combs,G.: Wireshark network analysis,
Protocol Analysis Institute Inc. 2012, Chappell
University San Jose CA USA.
[7] Holz,T., Bos,H.: Detection of intrusion and malware,
and vulnerability assessment, Springer Heidelberg
Dordrecht London UK 2011.


AN IMPLEMENTATION OF MANET NETWORKS ON COMMAND POST DURING MILITARY OPERATIONS

VLADIMIR RISTIĆ
Military Academy, Belgrade, vladarist@gmail.com
BOBAN Z. PAVLOVIĆ
Military Academy, Belgrade, bobanpav@yahoo.com
SAŠA DEVETAK
Military Academy, Belgrade, sasa.devetak@va.mod.gov.rs

Abstract: This paper proposes the use of Mobile Ad hoc Networks (MANET) for the needs of command during military operations. Mobile ad hoc networks are highly dynamic networks characterized by the absence of physical infrastructure. Each node of these networks functions as a router which discovers and maintains the routes to other nodes in the network. In such networks, nodes are able to move and synchronize with their neighbors. Due to mobility, connections in the network can change dynamically and nodes can be added and removed at any time. The nodes are free to move about and organize themselves into a network. Riverbed Modeler Academic Edition 17.5 simulation software was used during this research.
Keywords: MANET, Military operations, Riverbed Modeler.

1. INTRODUCTION
Serbian Armed Forces telecommunication system is
designed for the information distribution during different
phases of military operations. It is intended for collection,
transfer, protection, electronic processing, displaying, storing
and
distribution
of
information.
The
modern
telecommunication systems implementation provides
different end-users services, among others - data transfer.

The system provides the transmission of information for the command forces involved in the operation. The essential characteristics of the system include working in terrain conditions through a mobile system. The elements of the system enable the monitoring of commands and units in the area and provide the necessary telecommunications capacity and customer services.
The military IP network can be based on a fixed, wired infrastructure, utilizing networking equipment in operation and command centers. But it isn't practical or even possible to create a fixed, wired network infrastructure on a battlefield. The practical way to provide a networking infrastructure is to create a mobile wireless network.
At the command posts there is a local area network (LAN), which is networked via a mobile communication center for data transfer. The possibility of replacing the local area network with a mobile ad hoc network (MANET) is considered in this paper. The middle level of command in military operations requires a network with 19 nodes, so the simulation was done with this number of nodes.
The MANET network is analysed by monitoring the parameters of the AODV and OLSR routing protocols. The performance is analysed by means of throughput, network load and delay using Riverbed Modeler Academic Edition 17.5. The rest of the paper is organised as follows: the second section presents a brief overview of MANET and MANET routing protocols, the third section describes the simulation environment, the simulation results and performance analysis are shown in the fourth section, and the conclusion is drawn in the fifth section.

2. MANET

A mobile ad hoc network is a collection of wireless nodes that allows people and devices to communicate with each other without the help of an existing infrastructure. Each device in a MANET is free to move independently in any direction, and will therefore change its links to other devices frequently. In a MANET the mobile node can work as a router or a host. Because MANETs are independent of any infrastructure, they are used in military communications and rescue operations where the development of an infrastructure is neither feasible nor cost effective.
There are three categories of mobile ad hoc network routing protocols: proactive (or periodic) routing protocols (DSDV, OLSR), reactive (or on demand) routing protocols (DSR, AODV, TORA) and hybrid routing protocols (ZRP). Proactive routing protocols maintain the routes independently of the traffic pattern. On the other hand, reactive routing protocols determine the route only if it is needed. A hybrid routing protocol combines the advantages of proactive and reactive routing. Reactive routing protocols have less overhead since routes are maintained only if they are needed, while proactive routing protocols have higher overhead due to continuous route updating [1].
AODV is a reactive routing protocol, which does not


discover or maintain a route until or unless there is
a request from the nodes. AODV uses destination sequence numbers to ensure loop avoidance and route freshness. AODV is capable of both unicast and multicast routing. The protocol's function is divided into two operations: route discovery and route maintenance. When a node requests to communicate with another node, it starts the route discovery mechanism. The source node sends a route request message (RREQ) to its neighbors, and if those neighbor nodes do not have any information about the destination node, they forward the message further to their neighbors, and so on, until the destination node is found. The node which has information about the destination node sends a route reply message (RREP) to the initiator of the RREQ message. The path is recorded in the routing tables of the intermediate nodes, and this path identifies the route. When the initiator receives the route reply message the route is ready and the initiator can start sending the packets. A route error (RERR) is reported when the link to the next hop breaks [2].
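A toy Python sketch of the RREQ/RREP idea may help; it omits sequence numbers, timers and route maintenance, and the topology used here is invented purely for illustration.

from collections import deque

def aodv_route_discovery(neighbors, source, destination):
    # Simplified RREQ flooding: the request spreads hop by hop until a node
    # that knows the destination (here: the destination itself) is reached;
    # the recorded predecessors form the reverse path returned by the RREP.
    predecessor = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:                 # destination found, build the path
            path, hop = [], node
            while hop is not None:
                path.append(hop)
                hop = predecessor[hop]
            return list(reversed(path))
        for nxt in neighbors.get(node, []):     # forward the RREQ to all neighbours
            if nxt not in predecessor:
                predecessor[nxt] = node
                queue.append(nxt)
    return None                                 # no route: a RERR would be reported

topology = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(aodv_route_discovery(topology, 1, 5))     # e.g. [1, 2, 4, 5]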

OLSR is a proactive link-state routing protocol, which uses hello and topology control (TC) messages to discover and then disseminate link-state information through the mobile ad hoc network. Individual nodes use this topology information to compute next hop destinations for all nodes in the network, using shortest hop forwarding paths. Using hello messages, the OLSR protocol at each node discovers 2-hop neighbor information and performs a distributed election of a set of multipoint relays (MPRs). Nodes select MPRs such that there is a path to each of their 2-hop neighbors via a node selected as an MPR. These MPR nodes then forward the TC messages. This function of the MPRs makes OLSR unique among link-state routing protocols. The main goal of using the MPR nodes is to reduce the number of broadcasts in the same region. Each node selects a set of neighboring nodes within one hop (the MPR set of nodes). Adjacent nodes of a given node that are not selected as MPR nodes handle packets, but do not forward them. Only MPR nodes forward packets [3].

3. SIMULATION

The aim of the simulation is to compare the network performances of the AODV and OLSR routing protocols in different conditions within the network. The research was conducted using the network simulator Riverbed Modeler Academic Edition 17.5 [4].
Riverbed Modeler provides a modeling and simulation environment for designing communication protocols and network equipment. Riverbed Modeler Academic Edition 17.5 is a limited-feature version for educational users who want to utilize simulation software for networking classes. Riverbed Modeler Academic Edition incorporates tools for all phases of a study, including model design, simulation, data collection, and data analysis.
Two simulation scenarios were created with different simulation areas. In both scenarios 19 mobile nodes were used to assess the performance of the MANET network. In the first scenario, the nodes are randomly placed in an area of 1000 m x 1000 m. In the second scenario the simulation area is decreased to 100 m x 100 m. The network topology is based on the usage of mobile nodes. Each node in the network was configured to run the same routing protocol. The scenario was run for 900 seconds. For the analysis, constant Ftp (high load), Email (high load) and Video conferencing (low resolution video) traffic was generated in the network. The mobility model and the wireless network parameters were identical for all of the nodes. The velocity of each mobile node was defined between 0 and 10 m/s. The Receiver Group object was used to limit the possible set of receivers based on the distance between nodes. The distance threshold was set to a value of 250 meters. All nodes were equipped with transponders that use the IEEE 802.11g standard in wireless communication with a data rate of 11 Mbit/s. Table 1 shows the simulation parameters.

Table 1. Simulation parameters
Simulation parameter                 Value
Number of nodes                      19 mobile nodes
Simulation time                      900 seconds
Simulation area                      1000 m x 1000 m, 100 m x 100 m
Routing protocols                    AODV, OLSR
Mobility model                       Created path
Application                          Ftp, Email, Video conferencing
Wireless physical characteristics    IEEE 802.11g
Data rate                            11 Mbit/s

The performance of the simulated network is analyzed according to different performance metrics. The following performance metrics are employed in this study:
- Delay represents the time between the packet creation at the source node and its reception at the destination node.
- Throughput is the total amount of data packets that reach the receiver in a given time period.
- Network load represents the total load in bit/sec submitted to the wireless LAN layers by all higher layers in all WLAN nodes of the network.

4. PERFORMANCE ANALYSIS

Two network scenarios with different simulation areas within Riverbed Modeler Academic Edition 17.5 were created for this research. Performance was analysed for constant Ftp, Email and Video conferencing traffic in the network.

4.1. Ftp and Email application


Network with constant Ftp (high load) traffic is analysed
based on the results of network load, throughput and
delay in the scenario with 19 mobile nodes in an area of
1000 m x 1000 m.
Table 2. Simulation results for Ftp and Email in an area
of 1000 m x 1000 m

Routing protocol   Application   Throughput (bit/s)   Network load (bit/s)   Delay (s)
AODV               FTP           40277                38290                  0,001043
AODV               Email         14229                11286                  0,000282
OLSR               FTP           78681                48268                  0,000385
OLSR               Email         52324                21945                  0,000198

Figure 1. Network load and Throughput for Ftp in an area of 1000 m x 1000 m

Figure 1 shows the values of network load and throughput for the AODV and OLSR routing protocols with constant Ftp (high load) traffic in the network. From the simulation results it is clear that the throughput values are higher than the values of network load in both cases, with the AODV and OLSR routing protocols. The values of throughput with the OLSR routing protocol in use are better, which is shown in Table 2.
Based on the results of network load, throughput and delay in the scenario with 19 mobile nodes in an area of 1000 m x 1000 m, the network with constant Email (high load) traffic is analysed.

Figure 2. Network load and Throughput for Email in an area of 1000 m x 1000 m

Figure 2 shows the network load and throughput for 19 mobile nodes in the simulation area of 1000 m x 1000 m, with Email (high load) traffic for the AODV and OLSR routing protocols. The values for the OLSR routing protocol in use are much better than the values for AODV.
The values of the delay in the simulated network with 19 nodes in an area of 1000 m x 1000 m, with constant Ftp and Email traffic, are very small.
The simulation results of the network with 19 mobile nodes in an area of 1000 m x 1000 m, with Ftp (high load) and Email (high load) traffic, in the form of network load, throughput and delay values, are shown in Table 2. All values are satisfactory for a MANET network, especially with the OLSR routing protocol in use.

4.2. Video conferencing application

The network with constant Video conferencing (low resolution video) traffic is analyzed based on the results of network load, throughput and delay in the two simulated scenarios.

Table 3. Simulation results for Video conferencing in an area of 1000 m x 1000 m
Routing protocol   Application          Throughput (bit/s)   Network load (bit/s)   Delay (s)
AODV               Video Conferencing   7462940              8478321                0,185
OLSR               Video Conferencing   6303967              7283091                0,146

Figure 3. Network load and Throughput for Video conferencing in an area of 1000 m x 1000 m

Figure 3 shows the values of network load and throughput for the AODV and OLSR routing protocols with constant Video conferencing (low resolution video) traffic in the network with 19 mobile nodes randomly placed in the area of 1000 m x 1000 m. According to the mean values given in Table 3, the network load values are higher than the throughput values for the AODV and OLSR routing protocols.

Figure 4. Delay for Video conferencing in an area of 1000 m x 1000 m

Figure 4 shows the value of delay for constant Video conferencing (low resolution video) traffic in the network with 19 mobile nodes randomly placed in the area of 1000 m x 1000 m, with the AODV and OLSR routing protocols. The mean values of delay are represented in Table 3.

Table 4. Simulation results for Video conferencing in an area of 100 m x 100 m
Routing protocol   Application          Throughput (bit/s)   Network load (bit/s)   Delay (s)
AODV               Video Conferencing   4378122              4009693                0,860
OLSR               Video Conferencing   5201140              5082491                0,551

Figure 5. Network load and Throughput for Video conferencing in an area of 100 m x 100 m

Figure 5 shows the values of network load and throughput for the AODV and OLSR routing protocols with constant Video conferencing (low resolution video) traffic in the network with 19 mobile nodes randomly placed in the area of 100 m x 100 m. The values in Table 4 show that the relation between throughput and network load is changed. The obtained values of throughput in this scenario are better than the values of network load.

Figure 6. Delay for Video conferencing in an area of 100 m x 100 m

Figure 6 shows the value of delay for Video conferencing (low resolution video) traffic in the network with 19 mobile nodes randomly placed in the area of 100 m x 100 m. The values are higher than the values in the simulated area of 1000 m x 1000 m.
The simulation results for Video conferencing show that in the first scenario, with 19 mobile nodes randomly placed in the area of 1000 m x 1000 m, the values of throughput are lower than the values of network load with both the AODV and OLSR routing protocols in use. This means that in that network the traffic needed by the routing protocols for discovering and maintaining routing paths exceeds the useful data traffic. With the smaller area of 100 m x 100 m that relation is changed, so in the second scenario the values of throughput are better than the values of network load. But in that case the delay increases, which has a negative impact on the network.

5. CONCLUSION

In this paper, in order to consider using a MANET instead of a LAN network at the command post during military operations, two simulation scenarios were created. In both scenarios constant Ftp (high load), Email (high load) and Video conferencing (low resolution video) traffic was generated in the network. The analysis of throughput, network load and delay in the MANET network shows a different outcome depending on the traffic used during the simulations. In the case where Ftp and Email traffic was used for the simulation, the results show that a MANET network can be used to replace the LAN network. In those cases the network with the OLSR routing protocol shows better characteristics than the network with the AODV routing protocol. When Video conferencing (low resolution video) was used for the simulation, the characteristics of throughput and network load in the case of the network with 19 mobile nodes randomly placed in the area of 1000 m x 1000 m are not appropriate for a MANET. In the network placed in the smaller area of 100 m x 100 m, the problem was the high value of delay, which again is not appropriate for a MANET.
For networks with a lower value of traffic, MANET networks can work properly and can replace the existing LAN networks. This refers to networks with Ftp, Email, or similar traffic in use. In such a MANET network it is recommended to use the OLSR routing protocol.
In networks with higher traffic in use, the MANET network must be modified. In this paper the simulation area was reduced, and the results are still not appropriate.
In future work the use of higher transponder power should be examined, and the data rate in wireless communication can also be modified.

References
[1] Narinderjeet Kaur, Maninder Singh, Effects of Caching on the Performance of DSR Protocol, Journal of Engineering, Volume 2, Issue 9 (September 2012), pp 07-11.

[2] Gagangeet Singh Aujla, Sandeep Singh Kang,


Comprehensive Evaluation of AODV, DSR, GRP,
OLSR and TORA Routing Protocols with varying
number of nodes and traffic applications over
MANETs, Journal of Computer Engineering, Volume
9, Issue 3 (Mar. - Apr. 2013), pp 54-61.
[3] Tepsi, D., Veinovi, M., ivkovi, D., Ili, N., A
Novel Proactive Routing Protocol in Mobile Ad Hoc
Networks, Ad Hoc & Sensor Wireless Networks,
Volume 27 (2015), pp 239-261.
[4] Riverbed Modeler Academic Edition 17.5, [Online]
http://www.riverbed.com, Visited 05.07.2016.

PRACTICAL IMPLEMENTATION OF DIGITAL DOWN CONVERSION FOR WIDEBAND DIRECTION FINDER ON FPGA

VUK OBRADOVIĆ
UNO-LUX NS d.o.o, vuk.obrad.91@gmail.com
PREDRAG OKILJEVIĆ
Military Technical Institute, Belgrade, predrag.okiljevic@mod.gov.rs
NADICA KOZIĆ
Military Technical Institute, Belgrade, nadica.kozic@gmail.com
DEJAN IVKOVIĆ
Military Technical Institute, Belgrade, dejan.ivkovic@vti.vs.rs

Abstract: Modern direction finders are designed as sensors for interception of radio signals in wider instantaneous
bandwidth. Software radio technology offers ability to develop wideband direction finder architectures with
programmable intermediate frequency, instantaneous bandwidth and frequency resolution. In this paper
implementation of synchronous digital down conversions (DDCs) for wideband direction finder on FPGA is presented.
DDCs are used to downconvert signal from intermediate frequency (IF) to baseband. For wider bandwidth DDCs need
to be implemented on FPGA which combines the flexibility of a general-purpose DSP plus the speed, density, and low
cost of an application-specific integrated circuit (ASIC) implementation. Measurements and computation of input
signals from five channels in parallel on two FPGA devices is presented. Benefits of high speed parallel computing
have been observed and discussed in this work. System performance and measured results are briefly presented in
paper.
Keywords: FPGA, DDC, direction finder, RF, parallel computing

1. INTRODUCTION
Determination of emitter positions (emitter geo-locations)
has various applications in both civil and defense oriented
fields. In the defense application, the determination of
emitter positions is very important in EW (Electronic
Warfare) systems and systems for gathering intelligence
data such as the COMINT (Communication Intelligence)
system. Electronic support (ES), as a part of EW, provides
near-real-time information which can be integrated into
the Electronic Order of Battle (EOB) for situational
awareness [1]. In order to determine emitter position,
some of the two-step techniques can be implemented in
modern direction finder (DF) system. Two-step
positioning techniques, or indirect methods, are based on
the estimation of a specified parameter such as the
direction of arrival (DOA) or the time of arrival (TOA) at
each sensor. The estimated parameters are sent to the
central sensor (Fusion Center) in order to determine the
emitter location. DOA estimation is usually studied as
part of the more general field of array processing. Many
papers in this field are focused on radio direction finding
that is estimating the direction of electromagnetic waves
impinging on one or more antennas. Modern DFs are
based on interception of radio signals in instantaneous
bandwidth that is considerably wider than bandwidth of
the signals. Instantaneous bandwidth at modern DF is
larger than 10MHz. For example, in the R&S DDF0xA Digital HF/VHF/UHF Search Direction Finder the instantaneous bandwidth is 10MHz [2], or from 10 up to 40MHz in the mrd5000 and mrd7000 family of wideband DF systems (WDF) [3].

In this work we present practical implementation of one


part of modern direction finder based on technology of
software defined radio using field programmable gate
array (FPGA). FPGA technology enable high-speed
processing in a compact footprint, while retaining the
flexibility and programmability of software radio
technology. FPGAs are popular for high-speed, computeintensive, reconfigurable applications (fast Fourier
transform (FFT), finite impulse response (FIR) and other
multiply-accumulate operations) [4].
In this paper, we present practical implementation of
synchronous digital down conversions (DDCs), from
intermediate frequency (IF) to baseband, in WDF with
20MHz instantaneous bandwidth. This DDC is
implemented in NI PXIe 7975R module ( containing
DSP-focused Xilinx Kintex-7 FPGA device) extended
with NI 5734 adapter module for 4 channel simultaneous
120MHz A/D conversion. This work is a part of research
and development of wideband direction finder system for
VHF/UHF band in Military Technical Institute from
Belgrade, Republic of Serbia.
NI PXIe 7975R [5] is an FPGA module for the National Instruments PXI platform [6]. The PXI platform consists of a chassis containing a controller and modules. Some of the features of the NI PXI chassis are high-speed lines for direct memory access (DMA) of real-time data (up to 8 GB/s) and user-configurable high-precision trigger and clock lines. In this application, two NI PXIe 7975R modules with NI 5734 A/D converter modules [7] in the same PXI chassis are used for signal acquisition and processing.

This paper consists of five parts. A brief introduction is given in Section I, the concept of a modern DF in Section II and the architecture of the DDC in Section III; some results of its practical implementation on FPGA are presented in Section IV, and, finally, results and conclusions are given in Section V.

2. CONCEPT OF MODERN DIRECTION FINDER

The architecture of a modern WDF consists of a DF antenna array, acquisition channels and a digital signal processing block. The basic architecture of a modern WDF is presented in Picture 1 [2].
The main role of the antenna array is to perform spatial sampling of the signal of interest (from the electromagnetic environment) and to convert the spatial electromagnetic wave to a guided electric wave (electric current). The acquisition channel includes an RF frontend with coherent tuners that convert the RF signal to an appropriate IF, A/D convertors and digital down convertors (DDC), which convert the signal to the baseband. The RF frontend consists of an antenna multiplexer matrix, low-noise amplifiers and band pass filters.
Usually the signal acquisition in a DF may be performed using parallel or sequential techniques. In parallel signal acquisition, the measurement of DOA is nearly instantaneous and there are as many signal acquisition channels as there are signals generated from the antenna elements in the DF antenna array. In the case of sequential signal acquisition, the number of acquisition channels is lower than the number of antenna elements in the DF antenna array. The measurement of DOA is available only after the end of a sequence that involves RF switching, either on the antennas or after phase and/or amplitude weighting of the signals from the DF antenna array. The basic advantage of sequential signal acquisition is reduced receiver hardware, which implies reduced complexity, cost, volume and weight. However, there is a time-accuracy tradeoff due to less data being gathered, compared with parallel signal acquisition. In sequential signal acquisition, the total time for signal acquisition depends on the number of antenna subsets.
A modern WDF has to provide a very high probability of interception of signals. To fulfill this requirement, it is necessary to increase the instantaneous bandwidth or to decrease the time for the data processing (which automatically leads to increasing processing speed and processing power). In order to provide a wider instantaneous bandwidth, it is possible to form more than one DDC at each acquisition channel. Using this approach, it is possible to provide an instantaneous bandwidth of the WDF that corresponds to the number of DDCs and their output sample rate.
The main purpose of the block for digital signal processing is to estimate the DOA for all frequency bins in the selected instantaneous bandwidth. In a WDF, DOA estimation is performed similarly to DOA estimation of wideband signals. Techniques for wideband DOA estimation can be divided into two main groups: coherent and non-coherent [8], according to how the information from the covariance matrices is used. The idea of trivial non-coherent DOA estimation of wideband signals is based on non-coherent wideband processing. The non-coherent approach processes each frequency bin independently and averages the DOA estimates over all the bins. Since each decomposed signal is approximated as a narrowband signal, any narrowband DOA estimation method is applicable.

Picture 1. Basic architecture of wideband direction finder



In real-world applications, especially in military systems, non-coherent focusing is most commonly used for the following reasons [8]: no a priori information is required; almost all of the signals are already separated in spectral terms, and even in densely occupied scenarios the signals can be separated in most cases with simple single-wave algorithms that decide the DOA; and the frequency ranges occupied by the emitters can be estimated.
Usually the block for digital signal processing consists of a digital filter bank that converts the signal from the time domain into the frequency domain (i.e. its frequency spectrum) and a part for the estimation of the DOA. After digital down-conversion to the baseband, the real and imaginary parts of the signal in each measurement path are fed to a digital filter bank, which is conventionally implemented as a Discrete Fourier Transform (DFT) or polyphase filter bank (PFB). During the next processing step, a quantity of samples determined by the selected averaging time is collected and fed to the DOA algorithms. This part of the processing typically involves the use of field programmable gate arrays (FPGAs) or digital signal processors (DSPs), because of the necessary high processing speed for the large amount of data and the tight coupling with hardware. The necessary processing power primarily depends on the chosen methods for DOA estimation (Watson-Watt, correlative interferometers, high-resolution methods, ...).
The last part of a modern DF system is the block for interaction with the operator (user), which performs visualization of the results from the digital signal processing block. This block is implemented in software only and is commonly referred to as the GUI (Graphical User Interface).

3. ARCHITECTURE OF DDC
In telecommunication and SIGINT systems, DDC
performs digital mixing (down conversion) of the input
signal, narrow band low-pass filtering with decimation
and gain adjustment of the digital input stream. It is an
essential component of any software radio-based system,
which enable simplification of RF front-end design,
including local oscillators and mixer design, as the
downconversion process is performed in digital domain.
Digital filters following the digital mixers provide a much
sharper filtering than traditional analog filtering. These
filters are usually decimating in nature, thereby reducing
the output data rate.
Each DDC typically contains an I/Q splitter based on a numerically controlled oscillator (NCO) that modulates the input signal coming from the RF section with sine and cosine waves from a digital signal generator (a mixer that quadrature-downconverts the signal to baseband), followed by a multi-stage cascaded integrator-comb (CIC) filter (also called a Hogenauer filter) and two stages of decimate-by-two filtering that isolate the desired signal: a Compensation FIR (CFIR) filter and a Programmable FIR (PFIR) filter. A simple block diagram of one DDC is shown in Picture 2 [9].

Picture 2. Typical DDC architecture

The NCO represents a sine/cosine signal generator with phase offset and frequency tuning inputs. A positive tuning frequency is used to downconvert the signal, while a negative tuning frequency can be used to upconvert the negative (spectrally flipped) image of the desired signal. When fully coherent acquisition channels are required (when the DF system uses DOA algorithms with phase estimation), the NCOs of all DDCs in all channels must be mutually synchronized (in time and frequency). The complexity of the NCO will depend on the final frequency setting accuracy required and on the spurious-free dynamic range (SFDR) of the system.

The CIC filter is useful for realizing large sample rate changes in digital systems without the need for multipliers and using a very compact architecture (the mixer outputs are decimated by a large integer factor). It reduces the input signal sample rate by a programmable factor. The CIC filter is built using two basic blocks: an integrator and a comb. An integrator is a single-pole IIR filter which essentially has a low pass filter characteristic. A comb is a FIR filter. A typical CIC filter is built as a cascade of multiple integrator and comb sections with a rate changer between the two sections (the typical number of stages is from 3 to 6). This filter can achieve high decimation or interpolation rates without using any multipliers, which makes it very useful for digital systems operating at high rates. The CIC outputs are followed by a coarse gain stage and then by two stages of decimate-by-2 filtering. The coarse gain circuit allows the user to boost the gain of weak signals after the input bandwidth of the downconverter has been reduced by the CIC filter.
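A small Python sketch of a CIC decimator, built only from integrators (cumulative sums), a rate changer and combs (differences), may make the multiplier-free structure concrete; the stage count and decimation factor below are illustrative and this is not the Xilinx IP core used by the authors.

import numpy as np

def cic_decimate(x, R=5, N=3, M=1):
    # N integrator stages, decimation by R, then N comb stages with delay M.
    # Only additions and subtractions are needed - no multipliers.
    y = x.astype(np.float64)
    for _ in range(N):                 # integrators (single-pole IIR, pole at z = 1)
        y = np.cumsum(y)
    y = y[::R]                         # rate changer
    for _ in range(N):                 # combs: y[n] - y[n - M]
        y = y - np.concatenate((np.zeros(M), y[:-M]))
    return y / (R * M) ** N            # normalize the DC gain of (R*M)^N

x = np.ones(100)                       # DC input -> output settles to 1.0
print(cic_decimate(x, R=5, N=3)[-3:])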

The CFIR filter represents the first decimate-by-two stage, and it is used to flatten the passband frequency response (it adjusts for the roll-off of the CIC passband); its typical specifications are shown in Picture 3 [10].

Picture 3. CFIR Specifications

The low-pass PFIR filter is the second decimate-by-two stage, and it is used to lower the magnitude of the ripples of the CIC filter and to customize the channel's spectral response. A typical use of the PFIR is to perform matched (root-raised cosine) filtering. The outputs of the PFIR are complex I/Q signals in baseband. Its typical specifications are shown in Picture 4 [10].

Picture 4. PFIR Specifications


The CIC filter, Compensation FIR and Programmable FIR blocks are used together in the DDC to achieve a high decimation ratio, aliasing attenuation and application-specific filtering. Designing the decimation filters so that their cascade response meets a given set of passband and stopband attenuation and frequency specifications can be a cumbersome process, where we must choose the correct combination of passband and stopband frequencies for each filter stage. Choosing the stopband frequencies properly ensures lower order filter designs (stopband attenuation and ripple are controlled by the order of the filters). In the design process, there is a need to relax the stopband frequency as much as possible to obtain the lowest filter orders, at the cost of allowing some aliasing energy in the transition band of the cascade response. This design tradeoff is convenient when the priority is to minimize the filter orders (to realize them on a dedicated hardware platform with limited resources such as FPGAs or DSPs).
A variety of dedicated DDC chips are available, those from Analog Devices, TI-Graychip (Texas Instruments) and Intersil being the most popular ones today. These DDCs offer programmable bandwidth (or decimation) and tuning frequency. However, they are usually targeted at narrower band applications. There is an increasing demand for higher bandwidth, and system designers are trying to design wideband systems with bandwidths up to 40MHz. These include WDF, radar, GPS, telemetry, wideband communications, etc. For larger bandwidths, the DDCs need to be implemented in an FPGA following the A/D converter.
In the case that the required instantaneous bandwidth of the WDF is 20MHz, it can be achieved by combining four DDCs at each acquisition channel with the following parameters:
- input sample rate: 120MHz;
- CIC decimation factor: 5;
- CFIR input sample rate: 24MHz;
- CFIR decimation factor (fixed): 2;
- PFIR input sample rate: 12MHz;
- PFIR decimation factor (fixed): 2;
- PFIR output sample rate: 6MHz.
The magnitude responses of the CIC, CFIR and PFIR filters and of their cascade connection are shown in Pictures 5-8, respectively.
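The rate budget of one such DDC (120 MHz / 5 / 2 / 2 = 6 MHz) can be illustrated with a minimal NumPy sketch; the crude moving-average filters below only stand in for the CIC/CFIR/PFIR designs above, and the tone and NCO frequencies are chosen purely for illustration.

import numpy as np

fs_in = 120e6                               # A/D sample rate
n = 12000
t = np.arange(n) / fs_in
x = np.cos(2 * np.pi * 31e6 * t)            # test tone at 31 MHz

f_nco = 30e6                                # NCO tuning of one of the four sub-bands
nco = np.exp(-2j * np.pi * f_nco * t)       # sine/cosine generator
iq = x * nco                                # mixer: complex baseband signal

def lowpass_and_decimate(sig, factor, taps=31):
    # stand-in low-pass (moving average) followed by decimation
    h = np.ones(taps) / taps
    return np.convolve(sig, h, mode="same")[::factor]

stage1 = lowpass_and_decimate(iq, 5)        # 120 MHz -> 24 MHz (CIC, R = 5)
stage2 = lowpass_and_decimate(stage1, 2)    # 24 MHz -> 12 MHz (CFIR, R = 2)
baseband = lowpass_and_decimate(stage2, 2)  # 12 MHz -> 6 MHz  (PFIR, R = 2)

print(len(x), len(baseband))                # 12000 input -> 600 complex output samples

Four such chains per channel, each tuned to a different sub-band, together cover the 20MHz instantaneous bandwidth.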
Picture 5. Magnitude response of CIC

Picture 6. Magnitude response of CFIR

Picture 7. Magnitude response of PFIR


Picture 8. Magnitude responses of cascaded CIC, CFIR and PFIR filters


4. IMPLEMENTATION ON FPGA

The digital down conversion algorithm was implemented using high level programming tools that enabled optimal programming of the Kintex-7 FPGA chip on the NI PXIe-7975R device. Xilinx IP cores were used for the integration of DDC chain elements such as the NCO, CIC, CFIR and PFIR filters, and the LabVIEW FPGA module was used for A/D acquisition, synchronization of the data transfer between the 5 channels and real-time reconfiguration of the IP cores.
The NCO sine and cosine signals are multiplied with the input samples in the mixer block to obtain a complex signal (in-phase and quadrature-phase components, I/Q). There are 4 NCO blocks used for down-converting into 4 different sub-bands. Each input signal is mixed with each of the 4 NCO signals, which produces 20 complex signals.
Each complex signal is processed in an individual DDC chain. Each DDC chain is implemented as an individual parallel processing hardware block in the FPGA circuit after code compilation. The first block in the DDC chain is the CIC filter, implemented as a CIC Xilinx IP core block with a variable decimation factor. For the instantaneous WDF bandwidth of 20MHz the CIC decimation factor is set to 5. The decimation factor can be increased for smaller instantaneous bandwidths. The CIC filter output signal is significantly attenuated after decimation, therefore a coarse gain is applied by multiplying the complex CIC output signal with a gain value. For the CFIR and PFIR filter implementation the FIR Xilinx IP core block was used. The core was configured to implement the filter coefficients previously calculated during the digital filter design.

LabVIEW FPGA module is graphical programming


environment which enables block based FPGA
programming. Each block executes arithmetical or logical
function. Blocks can be complex structures containing
several smaller blocks. Each block has input signals,
output signals, synchronization signals for input and
output and parameters. Because processing of some
complex blocks can take several sample clock cycles
synchronization inputs and outputs are used to signal
availability of valid block output. Inputs and outputs can
be connected between processing blocks directly or
onboard FIFO buffers can be used to transfer data
between blocks.

Signals at output of each DDC chain are transferred to


polyphase bank / DFT processing blocks using local
FPGA FIFO buffers. FPGA implementation ensures
determinism in signal processing due to hardware
implementation of processing blocks. Each of 20 DDC
complex signal outputs are always sent to FIFO at same
clock tick with constant rate of 6MHz.

Xilinx IP cores provide optimized and tested DSP blocks.


These blocks were configured using parameters obtained
in digital filter design of CIC, CFIR and PFIR filters.
Processing was implemented on two NI PXIe-7975R
modules in parallel. Three channels are processed on 1st
and 2 channels on 2nd module. Acquisition and
processing of all 5 channels needed to be synchronous
because of implemented DF algorithms. Signal sampling
and digital processing were driven by 120MHz sample
clock. NI PXI chassis provides 10 MHz reference clock
which was used for phase locking (PLL) 120 MHz
module sample clocks. Chassis trigger line was used to
share start signal for both modules.

In FPGA implementation of digital signal processing


device resource utilization is of great importance. Device
resource consumption on Xilinx Kintex 7 FPGA is
presented for implementation of 12 DDC processing
blocks on one NI PXIe-7975R module:
Total slices: 58.3% (37066 out of 63550)
Slice Registers: 26,6% (135478 out of 508400)
Slice LUTs: 27,3% (69499 out of 254200)
Block RAMs: 25,2% (200 out of 795)
DSP48s: 67,1% (1034 out of 1540)

After the PLL has locked and the start trigger occurs, in each sample clock cycle the analog input is acquired on all 5 channels as a 16-bit value. A direct digital synthesizer (DDS) Xilinx IP core is used to generate the NCO sine and cosine signals that feed the mixers described above.
Functionality and performance of synchronous data


processing was verified using a CW signal (Picture 9) and an AM modulated (Pictures 10 and 11) test RF signal with a 30dBm power level and a 30MHz frequency. The test signal was divided into two signals, each fed to one channel of two different NI 7975R devices in the same NI PXI chassis. The outputs of the corresponding DDCs have been compared. The amplitude level difference between the two DDC outputs was less than 3% and the phase difference was less than 1 degree for both test signals.

Picture 9. FPGA implemented DDC output - single tone input

Picture 10. FPGA implemented DDC output -AM modulated input

Picture 11. FPGA implemented DDC output -AM modulated input (enlarged)
5. CONCLUSIONS

Modern direction finders are based on the interception of radio signals in a wider instantaneous bandwidth in order to achieve a high probability of interception of the signal of interest. By increasing the instantaneous bandwidth of the DF it is possible to improve the probability of interception of the signal of interest. In this paper, we present an approach to increasing the instantaneous bandwidth using a combination of four DDCs per each channel of the DF. The DDCs are implemented in FPGA. FPGAs are becoming an integral part of DF design. FPGA technology enables high-speed processing in a compact footprint, while retaining the flexibility and programmability of software radio technology. The presented solution could be used in a software defined direction finder.

REFERENCES
[1] https://www.rohde-schwarz.com/us/product/ddf0xaproductstartpage_63493-9481.html
[2] http://www.gew.co.za/electronicwarfare/products/mrd5000-and-mrd7000-family/
[3] Angsuman,R.: FPGA-based applications for software
radio.RF Design Magazine (2004): 24-35.
[4] http://sine.ni.com/nips/cds/view/p/lang/en/nid/209714
[5] http://sine.ni.com/nips/cds/view/p/lang/en/nid/211794
[6] http://www.ni.com/pxi/
[7] Tuncer,T.E., Friedlander,B.: Classical and modern
direction-of-arrival estimation, Academic Press,
USA, 2009.
[8] Okiljevic,P., Pokrajac,I., Jelusic,D.: OPTIMIZING
CIC AND FIR FILTER'S COEFFICIENTS IN
GC4016 DDC CHAIN, 20. Telecommunications
Forum (TELFOR 2012), Print ISBN 978-1-46732983-5, Publisher IEEE, 20-22.11.2012., Belgrade,
page 756 759.
[9] Texas Instruments: GC4016 MULTI-STANDARD
QUAD DDC CHIP, Data Manual, August 2001
Revised July 2009


Pokrajac,P., Okiljevic,P., Vučić,D.: Fusion of multiple estimation of emitter position, Scientific Technical Review, ISSN 0350-0667, Vol. XLIX, No. 3-4, 2012, Belgrade, pp. 4-11.

STATISTICS OF RATIO OF TWO WEIBULL RANDOM VARIABLES WITH DIFFERENT PARAMETERS

IVICA MARJANOVIĆ
Republic of Serbia Ministry of Defence, Belgrade, ivicabeograd@gmail.com
DEJAN RANČIĆ
Faculty of Electronic Engineering, University of Niš, Niš, dejan.rancic@elfak.ni.ac.rs
DANIJELA ALEKSIĆ
College of Applied Technical Sciences, University of Niš, Niš, danijela.aleksic@vtsnis.edu.rs
DEJAN MILIĆ
Faculty of Electronic Engineering, University of Niš, Niš, dejan.mailic@elfak.ni.ac.rs
MIHAJLO STEFANOVIĆ
Faculty of Electronic Engineering, University of Niš, Niš, misa.profesor@gmail.com

Abstract: In this paper the ratio of two Weibull random variables with different parameters is considered. The probability density function and the cumulative distribution function of the proposed ratio are evaluated. The derived expression for the probability density function can be used for evaluating the bit error probability, and the expression for the cumulative distribution function can be used for calculating the outage probability of a wireless communication radio system operating over a Weibull short term fading channel in the presence of co-channel interference subjected to Weibull fading. The influence of the Weibull nonlinearity parameters and the average powers of the Weibull fading on the outage probability is analysed.
Keywords: Weibull fading, random variable, probability density function (PDF), cumulative distribution function (CDF),
level crossing rate (LCR).

1. INTRODUCTION
In this paper the ratio of two Weibull random variables is considered. The Weibull random variables have different nonlinearity parameters and average powers. The Weibull distribution can be used to describe small scale signal envelope variation in nonlinear, nonhomogeneous fading environments. The Weibull distribution has a parameter of nonlinearity. For the parameter β=2 the Weibull distribution reduces to the Rayleigh distribution, and when the parameter goes to infinity the Weibull short term fading channel becomes a no-fading channel [1-2]. Statistics of the ratio of two Weibull distributions can be used in the performance analysis of a wireless communication mobile system operating over a Weibull short term fading channel in the presence of co-channel interference subjected to Weibull multipath fading. The probability density function of the ratio of two Weibull random variables can be used for evaluating the average symbol error probability, and the cumulative distribution function of the ratio can be used for calculating the outage probability of a wireless communication system in the presence of Weibull short term fading and Weibull co-channel interference. In this paper, the level crossing rate of the ratio of two Weibull random processes is also evaluated, and this expression can be used for calculating the average fade duration of a wireless communication system in the presence of Weibull short term fading and co-channel interference subjected to Weibull small scale fading. Also, the ratio of the product of two Weibull random variables and a Weibull random variable is considered. For this ratio, the probability density function can be used for calculating the symbol error probability, the cumulative distribution function for evaluating the outage probability, and the level crossing rate for evaluating the average fade duration of a wireless relay communication mobile radio system with two sections operating over a Weibull multipath fading channel in the presence of co-channel interference subjected to Weibull short term fading [3-5].

Ratios and products of random variables have applications in the performance analysis of wireless mobile radio systems. There are several works in the open technical literature considering the statistical characteristics of ratios and products of random processes.
In paper [6], the statistics of the ratio of a product of two random variables and a random variable are considered, and the probability density function and cumulative distribution function of the ratio of a random variable and a product of two random variables are calculated in work [7]. In work [8], the level crossing rate of a product of N Rayleigh random processes is calculated, and this result is used for calculating the average fade duration of a wireless relay communication system with N sections in the presence of Rayleigh multipath fading. The level crossing rate of a product of two Nakagami-m random variables is analysed in paper [9], and the average fade duration for a wireless relay system with two sections in the presence of Nakagami-m fading is evaluated.

In this paper the statistics of the ratio of two Weibull random variables, and the statistics of the ratio of the product of two Weibull random variables and a Weibull random variable, are analysed and calculated. To the best of the authors' knowledge, the statistics of such ratios and products of Weibull random variables and Weibull random processes have not been reported in the open technical literature. The expressions derived in this paper can be used in the performance analysis and design of wireless communication systems in the presence of Weibull fading.

2. PDF AND CDF OF RATIO OF TWO WEIBULL RANDOM VARIABLES WITH DIFFERENT PARAMETERS

The ratio of two Weibull random variables x and y is

z = x/y,   x = z·y,                                                        (1)

where x and y follow the Weibull distributions

p_x(x) = (β1/Ω1) x^{β1−1} e^{−x^{β1}/Ω1},  x ≥ 0,                          (2)

p_y(y) = (β2/Ω2) y^{β2−1} e^{−y^{β2}/Ω2},  y ≥ 0,                          (3)

where β1 is the nonlinearity parameter and Ω1 the average power of the Weibull fading x, and β2 and Ω2 are the nonlinearity parameter and average power of the Weibull fading y. The probability density function of the ratio of two Weibull random variables is

p_z(z) = ∫_0^∞ dy · y · p_x(zy) p_y(y)
       = (β1β2/(Ω1Ω2)) z^{β1−1} ∫_0^∞ dy y^{β1+β2−1} e^{−z^{β1}y^{β1}/Ω1 − y^{β2}/Ω2}.     (4)

Introducing the substitution y^{β2}/Ω2 = t, i.e. y = (Ω2 t)^{1/β2}, dy = (Ω2^{1/β2}/β2) t^{1/β2−1} dt, the previous expression becomes the one-fold integral

p_z(z) = (β1 Ω2^{β1/β2}/Ω1) z^{β1−1} ∫_0^∞ dt t^{β1/β2} e^{−t − (Ω2^{β1/β2}/Ω1) z^{β1} t^{β1/β2}}.     (5)

The cumulative distribution function of the ratio of two Weibull random variables is [10]

F_z(z) = ∫_0^z dt p_z(t).                                                  (6)

After the same substitution, the expression for F_z(z) becomes

F_z(z) = 1 − ∫_0^∞ dt e^{−t} e^{−(Ω2^{β1/β2}/Ω1) z^{β1} t^{β1/β2}}.         (7)

The derived expression for the probability density function of the ratio of two Weibull random variables can be used to calculate the symbol error probability. For coherent modulation schemes, the average bit error probability has the form [10]

p_e1 = ∫_0^∞ erfc(√(z/2)) p_z(z) dz,                                       (8)

and for non-coherent modulation schemes the average bit error probability can be calculated from [10]

p_e2 = ∫_0^∞ e^{−z} p_z(z) dz.                                             (9)
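Where helpful, the integrals above can be checked numerically. The following minimal sketch (not taken from the paper; all parameter values are illustrative assumptions) evaluates the PDF (4) and the CDF (7) by numerical integration and compares the CDF with a Monte Carlo ratio of Weibull samples generated through the exponential transform x = (Ω E)^{1/β}:

import numpy as np
from scipy.integrate import quad

b1, O1 = 2.5, 1.0        # nonlinearity parameter and average power of x (assumed values)
b2, O2 = 2.0, 1.0        # nonlinearity parameter and average power of y (assumed values)

def pdf_z(z):
    """PDF of z = x/y, eq. (4), by direct numerical integration."""
    f = lambda y: y * (b1 / O1) * (z * y)**(b1 - 1) * np.exp(-(z * y)**b1 / O1) \
                    * (b2 / O2) * y**(b2 - 1) * np.exp(-y**b2 / O2)
    return quad(f, 0.0, np.inf)[0]

def cdf_z(z):
    """CDF of z = x/y, one-fold integral (7)."""
    f = lambda t: np.exp(-t) * np.exp(-(O2**(b1 / b2) / O1) * z**b1 * t**(b1 / b2))
    return 1.0 - quad(f, 0.0, np.inf)[0]

rng = np.random.default_rng(0)
x = (O1 * rng.exponential(size=200_000))**(1.0 / b1)   # Weibull(b1, O1) samples
y = (O2 * rng.exponential(size=200_000))**(1.0 / b2)   # Weibull(b2, O2) samples
print(pdf_z(1.0), cdf_z(1.0), np.mean(x / y <= 1.0))   # last two numbers should agree closely

Agreement between the analytical CDF and the empirical fraction is a quick sanity check on (4) and (7).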

3. LEVEL CROSSING RATE OF RATIO OF TWO WEIBULL RANDOM VARIABLES

The Weibull random processes x and y can be written in terms of Rayleigh random processes x1 and y1 as x = x1^{2/β1} and y = y1^{2/β2}, so that

z = x/y = x1^{2/β1} / y1^{2/β2},   x1 = z^{β1/2} y1^{β1/β2},               (10)

where x1 and y1 are Rayleigh random variables and follow the distributions

p_x1(x1) = (2x1/Ω1) e^{−x1²/Ω1},  x1 ≥ 0,
p_y1(y1) = (2y1/Ω2) e^{−y1²/Ω2},  y1 ≥ 0.                                  (11)

The first derivative of z is

ż = (2/β1) x1^{2/β1−1} y1^{−2/β2} ẋ1 − (2/β2) x1^{2/β1} y1^{−2/β2−1} ẏ1.    (12)

The first derivative of a Rayleigh random process has a Gaussian distribution. Thus, the random variables ẋ1 and ẏ1 follow Gaussian distributions, and a linear combination of Gaussian random variables is also Gaussian. Therefore, the random variable ż has a conditional Gaussian distribution. The mean of ż is zero, since the means of ẋ1 and ẏ1 are zero. The variance of ż is

σ_ż² = (4/β1²) x1^{4/β1−2} y1^{−4/β2} σ_ẋ1² + (4/β2²) x1^{4/β1} y1^{−4/β2−2} σ_ẏ1²,     (13)

where σ_ẋ1² = π² f_m² Ω1, σ_ẏ1² = π² f_m² Ω2 and f_m is the maximal Doppler frequency. After substituting, the expression for the variance becomes

σ_ż² = 4π² f_m² z² ( Ω1/(β1² x1²) + Ω2/(β2² y1²) ),  with x1 = z^{β1/2} y1^{β1/β2}.     (14)

The joint probability density function of z, ż and y1 is

p_zży1(z, ż, y1) = p_ż(ż | z, y1) p_z(z | y1) p_y1(y1),                     (15)

where

p_z(z | y1) = |dx1/dz| p_x1(z^{β1/2} y1^{β1/β2}) = (β1/2) z^{β1/2−1} y1^{β1/β2} p_x1(z^{β1/2} y1^{β1/β2}).     (16)

The joint probability density function of z and ż is

p_zż(z, ż) = ∫_0^∞ dy1 p_ż(ż | z, y1) p_z(z | y1) p_y1(y1).                 (17)

The level crossing rate of the random process z can be calculated as the average value of the first derivative of the random process z:

N_z = ∫_0^∞ dż ż p_zż(z, ż)
    = ∫_0^∞ dy1 (σ_ż/√(2π)) (β1/2) z^{β1/2−1} y1^{β1/β2} p_x1(z^{β1/2} y1^{β1/β2}) p_y1(y1),     (18)

which is a one-fold integral.
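The level crossing rate N_z in (18) and the corresponding average fade duration can also be estimated by simulating the underlying Rayleigh processes. The sketch below is illustrative only: the Doppler filter is a crude one-pole approximation and all parameter values are assumed, not taken from the paper.

import numpy as np
from scipy.signal import lfilter

b1, O1, b2, O2 = 2.5, 1.0, 2.0, 1.0      # assumed nonlinearity parameters and average powers
fm, fs, Ns = 50.0, 2000.0, 200_000       # Doppler frequency [Hz], sampling rate [Hz], samples

def rayleigh_process(Om, rng):
    """Rough Rayleigh fading generator: one-pole low-pass filtered complex Gaussian noise."""
    g = rng.standard_normal(Ns) + 1j * rng.standard_normal(Ns)
    a = np.exp(-2.0 * np.pi * fm / fs)                   # crude Doppler-bandwidth shaping
    h = lfilter([np.sqrt(1.0 - a**2)], [1.0, -a], g)
    h *= np.sqrt(Om / np.mean(np.abs(h)**2))             # scale so that E|h|^2 = Om
    return np.abs(h)

rng = np.random.default_rng(1)
x = rayleigh_process(O1, rng) ** (2.0 / b1)              # Weibull process with (b1, O1)
y = rayleigh_process(O2, rng) ** (2.0 / b2)              # Weibull process with (b2, O2)
z = x / y

thr = 1.0                                                # crossing level
below = z < thr
down = np.count_nonzero(~below[:-1] & below[1:])         # number of downward crossings
T = Ns / fs                                              # observation time [s]
print("LCR estimate:", down / T, "crossings/s")
print("AFD estimate:", below.mean() * T / max(down, 1), "s")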

4. LEVEL CROSSING RATE OF THE RATIO OF PRODUCT OF TWO WEIBULL RANDOM PROCESSES AND WEIBULL RANDOM PROCESS

The ratio of the product of two Weibull random variables and a Weibull random variable is

w = x·y/z = x1^{2/β1} y1^{2/β2} / z1^{2/β3},                               (19)

where x1, y1 and z1 are Rayleigh random processes with distributions

p_x1(x1) = (2x1/Ω1) e^{−x1²/Ω1},  x1 ≥ 0,
p_y1(y1) = (2y1/Ω2) e^{−y1²/Ω2},  y1 ≥ 0,
p_z1(z1) = (2z1/Ω3) e^{−z1²/Ω3},  z1 ≥ 0.                                  (20)

The first derivative of w is

ẇ = (2/β1) x1^{2/β1−1} y1^{2/β2} z1^{−2/β3} ẋ1 + (2/β2) x1^{2/β1} y1^{2/β2−1} z1^{−2/β3} ẏ1 − (2/β3) x1^{2/β1} y1^{2/β2} z1^{−2/β3−1} ż1.     (21)

The first derivative of a Rayleigh random process has a Gaussian distribution. Thus, ẋ1, ẏ1 and ż1 follow Gaussian distributions, and a linear combination of Gaussian random variables is Gaussian. Therefore ẇ has a conditional Gaussian distribution. The mean of ẇ is zero, since the means of ẋ1, ẏ1 and ż1 are zero. The variance of ẇ is

σ_ẇ² = (4/β1²)(w²/x1²) σ_ẋ1² + (4/β2²)(w²/y1²) σ_ẏ1² + (4/β3²)(w²/z1²) σ_ż1²,     (22)

where σ_ẋ1² = π² f_m² Ω1, σ_ẏ1² = π² f_m² Ω2 and σ_ż1² = π² f_m² Ω3. After substituting, the expression for the variance becomes

σ_ẇ² = 4π² f_m² w² ( Ω1/(β1² x1²) + Ω2/(β2² y1²) + Ω3/(β3² z1²) ),  with z1 = x1^{β3/β1} y1^{β3/β2} w^{−β3/2}.     (23)

The joint probability density function of w, ẇ, x1 and y1 is

p_wẇx1y1(w, ẇ, x1, y1) = p_ẇ(ẇ | w, x1, y1) p_w(w | x1, y1) p_x1(x1) p_y1(y1),     (24)

where

p_w(w | x1, y1) = |dz1/dw| p_z1(x1^{β3/β1} y1^{β3/β2} w^{−β3/2}),  dz1/dw = −(β3/2) x1^{β3/β1} y1^{β3/β2} w^{−β3/2−1}.     (25)

After substituting, the joint probability density function of w and ẇ becomes

p_wẇ(w, ẇ) = ∫_0^∞ dx1 ∫_0^∞ dy1 p_ẇ(ẇ | w, x1, y1) p_x1(x1) p_y1(y1) (β3/2) x1^{β3/β1} y1^{β3/β2} w^{−β3/2−1} p_z1(x1^{β3/β1} y1^{β3/β2} w^{−β3/2}).     (26)

The level crossing rate of the random process w is

N_w = ∫_0^∞ dẇ ẇ p_wẇ(w, ẇ)
    = ∫_0^∞ dx1 ∫_0^∞ dy1 (σ_ẇ/√(2π)) p_x1(x1) p_y1(y1) (β3/2) x1^{β3/β1} y1^{β3/β2} w^{−β3/2−1} p_z1(x1^{β3/β1} y1^{β3/β2} w^{−β3/2}),     (27)

which is a two-fold integral.

5. NUMERICAL RESULTS

Figure 1. Cumulative distribution function of the ratio of the product of two Weibull random variables and a Weibull random variable.

In Figure 1, the cumulative distribution function of the ratio of the product of two Weibull random variables and a Weibull random variable versus the envelope is shown for several values of the average powers and nonlinearity parameters of the Weibull random variables. The cumulative distribution function increases as the signal envelope increases. For higher values of the nonlinearity parameters the cumulative distribution function is lower and the system performance is better. The influence of the envelope on the outage probability is higher for lower values of the nonlinearity parameter, and the influence of the signal envelope on the cumulative distribution function is the highest for moderate values of the signal envelope.

Figure 2. Cumulative distribution function of the ratio of the product of two Weibull random variables and a Weibull random variable.

In Figure 2, the cumulative distribution function of the ratio of the product of two Weibull random variables and a Weibull random variable is also shown. The cumulative distribution function approximates the outage probability of the wireless communication system. The outage probability increases as the signal envelope increases. When the average power Ω1 increases, the cumulative distribution function decreases and the outage probability improves. Also, when the average power Ω2 increases, the system performance is better. The influence of the signal envelope on the outage probability decreases for higher values of the average powers Ω1 and Ω2.
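The level-crossing-rate curves discussed next follow from numerically evaluating the two-fold integral (27). A minimal sketch, with assumed parameter values and truncated integration limits (not the authors' code), is:

import numpy as np
from scipy.integrate import dblquad

b1, b2, b3 = 2.0, 2.0, 2.0        # nonlinearity parameters (assumed values)
O1, O2, O3 = 1.0, 1.0, 1.0        # average powers (assumed values)
fm = 100.0                        # maximal Doppler frequency in Hz (assumed)

def p_ray(r, Om):
    """Rayleigh PDF with average power Om."""
    return 2.0 * r / Om * np.exp(-r**2 / Om)

def lcr_w(w):
    """Two-fold integral (27) for the level crossing rate of w, with truncated limits."""
    def integrand(y1, x1):
        z1 = x1**(b3 / b1) * y1**(b3 / b2) * w**(-b3 / 2.0)
        var_dw = 4.0 * np.pi**2 * fm**2 * w**2 * (
            O1 / (b1**2 * x1**2) + O2 / (b2**2 * y1**2) + O3 / (b3**2 * z1**2))
        jac = (b3 / 2.0) * x1**(b3 / b1) * y1**(b3 / b2) * w**(-b3 / 2.0 - 1.0)
        return np.sqrt(var_dw / (2.0 * np.pi)) * p_ray(x1, O1) * p_ray(y1, O2) * jac * p_ray(z1, O3)
    val, _ = dblquad(integrand, 1e-3, 8.0, lambda x: 1e-3, lambda x: 8.0)
    return val

print(lcr_w(1.0))   # level crossing rate at w = 1 for the assumed parameters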

The level crossing rate of the ratio of the product of two Weibull random variables and a Weibull random variable versus the signal envelope, for several values of the average powers and nonlinearity parameters, is shown in Figure 3. The level crossing rate increases for lower values of the signal envelope and decreases for higher values of the signal envelope. The influence of the nonlinearity parameter on the level crossing rate is higher for lower values of the nonlinearity parameter. Also, the influence of the signal envelope on the level crossing rate is lower when the nonlinearity parameter increases. The level crossing rate decreases when the nonlinearity parameter increases, and the system performance is better for lower values of the level crossing rate.

Figure 3. Level crossing rate of the ratio of the product of two Weibull random variables and a Weibull random variable.

Figure 4. Cumulative distribution function of the ratio of two Weibull random variables.

In Figure 4, the cumulative distribution function of the ratio of two Weibull random variables in terms of the signal envelope is shown for several values of the average powers Ω1 and Ω2 and of the nonlinearity parameter. The cumulative distribution function increases when the ratio of the two Weibull random variables increases. The cumulative distribution function has lower values when the nonlinearity parameter increases, so the outage probability is better for higher values of the nonlinearity parameter. The influence of the signal envelope on the cumulative distribution function is higher when the nonlinearity parameter decreases.

Figure 5. Level crossing rate of the ratio of two Weibull random variables.

The level crossing rate of the ratio of two Weibull random variables as a function of that ratio is shown in Figure 5 for several values of the nonlinearity parameter; for this figure Ω1 = Ω2 = 1. The level crossing rate increases for lower values of the signal amplitude and decreases for higher values of the signal amplitude. The influence of the signal envelope on the level crossing rate is lower for higher values of the signal amplitude. Also, the influence of the signal amplitude on the level crossing rate is lower for higher values of the nonlinearity parameter.

6. CONCLUSION

In this paper, the ratio of two Weibull random variables with different parameters is considered. The probability density function, cumulative distribution function and level crossing rate are evaluated as expressions with one-fold integrals. The expression for the probability density function can be used to calculate the bit error probability, the expression for the cumulative distribution function can be used to calculate the outage probability, and the average level crossing rate can be used to calculate the average fade duration of a wireless communication system operating over a Weibull short-term fading environment in the presence of co-channel interference subjected to Weibull short-term fading. Also, the ratio of the product of two Weibull random variables and a Weibull random variable is studied, and the level crossing rate of the proposed ratio is evaluated as an expression with a two-fold integral. The derived expression can be used to evaluate the average fade duration of a wireless mobile relay radio system with two sections operating over non-identical Weibull small-scale fading channels in the presence of co-channel interference subjected to Weibull short-term fading. The obtained expression for the level crossing rate can also be used to evaluate the level crossing rate of the ratio of the product of two Rayleigh random processes and a Rayleigh random process, as well as the level crossing rate of the ratio of the product of two Weibull random processes and a Rayleigh random process. When the nonlinearity parameter goes to infinity, the Weibull random process reduces to a non-fading process. The influence of the Weibull nonlinearity

parameters on the statistics of the ratio of Weibull random variables is analysed and discussed. The outage probability decreases and the system performance improves when the nonlinearity parameters increase.

References
[1] A. Goldsmith, Wireless Communications, Cambridge University Press, New York, NY, USA, 2005.
[2] J. Proakis, Digital Communications, 4th ed., McGraw-Hill, New York, 2001.
[3] M. K. Simon, M. S. Alouini, Digital Communication over Fading Channels, John Wiley & Sons, USA, 2000.
[4] G. L. Stüber, Principles of Mobile Communications, Kluwer Academic Publishers, Norwell, MA, 1996.
[5] S. Panić et al., Fading and Interference Mitigation in Wireless Communications, CRC Press, USA, 2013.
[6] E. S. Mekic, N. Sekulovic, M. Bandjur, M. Stefanovic, P. Spalevic, "The distribution of ratio of random variable and product of two random variables and its application in performance analysis of multi-hop relaying communications over fading channels", Przeglad Elektrotechniczny, vol. 88, no. 7A, pp. 133-137, 2012.
[7] E. Mekic, M. Stefanovic, P. Spalevic, N. Sekulovic, A. Stankovic, "Statistical Analysis of Ratio of Random Variables and Its Application in Performance Analysis of Multihop Wireless Transmissions", Mathematical Problems in Engineering, 2012.
[8] Z. Hadzi-Velkov, N. Zlatanov, G. K. Karagiannidis, "On the second order statistics of the multihop Rayleigh fading channel", IEEE Transactions on Communications, 57(6), pp. 1815-1823, June 2009.
[9] N. Zlatanov, Z. Hadzi-Velkov, G. K. Karagiannidis, "Level crossing rate and average fade duration of the double Nakagami-m random process and application in MIMO keyhole fading channels", IEEE Communications Letters, 12(11), pp. 822-824, 2008.
[10] I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series, and Products, 7th edition, Academic Press, New York, NY, USA, 2007.

SOFTWARE AND INFORMATIONAL SYSTEMS IN THE PRODUCTION


OF DTM25 OF THE MILITARY GEOGRAPHICAL INSTITUTE
ALEKSANDAR PAVLOVIĆ
Vojnogeografski institut, Beograd, aleksandarsrb@gmail.com
VIKTOR MARKOVIĆ
Vojnogeografski institut, Beograd, viktor_bre@yahoo.com
ANA VUČIĆEVIĆ
Vojnogeografski institut, Beograd, vucicevic.ana@outlook.com
SAŠA BAKRAČ
Vojnogeografski institut, Beograd, sbakrac@yahoo.com

Abstract: This paper reviews the framework used in the production of the digital topographic map at the scale of 1:25000 in the Military Geographical Institute. The presented framework has multiple advantages over the traditional way of map making, such as: faster development of the map, a reduced possibility of human error in generating data and their graphical representation, reduced cost of production, improved quality of the maps, a reduced possibility of arbitrariness and inconsistency in operation, and easier and faster adaptation to customer requirements. The description of the information systems, software, operating systems and application programs forms the basic chapters of the paper.
Keywords: topographic map, Military Geographical Institute, spatial data, database, production of digital map.

1. INTRODUCTION
The expansion of digital technology, which brings the development and massive use of information and communication systems, affects all sectors of socio-economic life, including the mapping industry. The beginning of the 21st century in our country is marked by the interruption of making cartographic products in the traditional manner and the transition to a digital, i.e. Geographic Information System based, manufacturing process, which uses digital spatial data as the basic elements of the map. The transition from analog technology to digital mapping initiates a new organization of the production chain, in which some stages lose their importance, while others appear as important constituents of the overall technological process of making digital maps.
This paper presents the information framework for the production of the digital topographic map at the scale of 1:25000 (hereinafter DTM25), whose production is currently ongoing in the Military Geographical Institute (hereinafter MGI), with special emphasis on the applied software solutions and the organization of the domain computer network as one of the main components of the new map production system. The hardware component, as the basis for the production, will not be considered in this paper.

2. INFORMATION SYSTEMS
Information systems (hereinafter IS) in cartography are an integrated set of databases, software and hardware. The phrase "an integrated set" refers to the existence of dependencies between these components, so that they cannot be considered separately. In a broader sense, a geographic information system represents the integration of the processes of collecting, storing, processing, displaying and distributing spatial data, but it also provides the possibility of setting up interactive queries and data analysis for different purposes [1].
The production of DTM25 is implemented in the Central Geospatial Database (hereinafter CGBP). This makes it possible to connect the generated data with other systems (users), as well as to "accept" data from other systems in the production process [4].
In modern information systems, networks are service-oriented, and the VGI DTM25 information system is also service-oriented. This represents a distributed model in which the server provides services to other computer systems, the clients, and communication is enabled via the intranet network. The advantages of this architecture are numerous; the most important are central management and administration of network services and clients, centralized security policy management, easier data backups and others.
Based on the above, we conclude that the server is crucial in the IS, because the complete production process is based on a client-server architecture [5]. Because of this importance, the server will be presented in the further text in terms of its software (the term server can refer both to the hardware and to the server software). The most important services that the server provides for the production of DTM25 in the CGTBP are: Microsoft Active Directory, Image Services, Print Services, File Services, Internet Access Services, Security Services and others. Of these, the first two services have the greatest significance in the production of DTM25 in the MGI.

2.1. Microsoft Active Directory
Active Directory (AD) has become an essential service for the needs of modern business. It functions as a central place to store information about the identity of users, computers and services, it is used to authenticate users and computers, and it provides the mechanisms by which users and computers access network resources (printing, file sharing, databases, etc.).
The main role of AD is to provide the entire Identity and Access infrastructure, which includes the tools and technologies required to integrate people, processes and technology within an organization. The most basic functionalities of AD are:
- Authentication - any user, computer or other entity defined in the network must first verify its identity before AD enables it to work as part of the AD domain;
- Access control - AD stores all data (files and folders, AD objects) and resources (computers, printers, services...) within the domain, ensuring that access to resources is permitted only to those identities to which these resources should be made available;
- Monitoring - AD contains tools and mechanisms that enable monitoring, auditing and event reporting;
- Increased security - the application of AD significantly increases the security of networks and information, as information about identities and their passwords is stored encrypted in the central database. All users have limited rights of access to computers and the network, and they are prevented from installing new software, which largely prevents the installation of malware;
- Simplified deployment - installing and uninstalling client operating systems and applications for an entire group of uniform computers can be performed from a central location and at the desired time;
- Integration of AD-supported applications - some applications have the ability to integrate with AD, so their administration and management are integrated into the administration of AD.

2.2. Image Services
In the production of DTM25 the basic input data are aerial images. Scanned and geo-referenced maps of earlier editions, which were not made by digital technology, are also used as input data. Aerial images are used in the photogrammetric restitution phase (3D and 2D). In order to "serve" aerial images as a basis for further processing, their publication via an image service has in recent years proved to be the most economical and best solution [2][3].
The world leader in software for GIS and cartographic processing, the American company ESRI, has integrated within its server component ArcGIS Server 10.1 an Image extension that is used for publishing an image service, as a supplement to the desktop version of the software or as a web service [6]. This addition to the server component allows the creation of a dynamic mosaic and real-time processing of a large number of aerial images. Publication of aerial images via the image service provides better quality, simplifies maintenance and management, and allows faster and easier access to data for a large number of users.
The benefits of implementing image services are:
- data processing takes place in real time on the server, from a single source, with no burden on the clients' computers;
- it simplifies aerial image management and at the same time allows publishing large collections of images without preprocessing;
- the possibility of making mosaics of aerial images of different formats, projections, locations and resolutions;
- publication of a special service at the level of metadata;
- by cutting out the desired area and saving it on a local computer, it enables the use of unified aerial images outside the computer network.

Picture 1. Published Image Service, IS MGI
3. SOFTWARE
Software tools are composed of the programs that are used to start the computer and its peripherals (hardware), as well as the programs for the realization of tasks such as the creation and editing of text or maps. Within software there are three basic components: the machine language of the computer, the operating system and the application programs. Machine language is based on the computer processor cycle and is defined by the binary alphabet. The symbols of the binary alphabet are 0 and 1, each of which constitutes one bit, while a group of 8 bits forms a byte. Operations and data are represented as series of bits, and their meaning depends on the computer architecture.
The operating system is a program that connects the users of computers and the computer hardware. Its task is to provide all the necessary conditions for the execution of application programs on the computer.
Application programs are used to solve individual tasks and are written for a specific application. Today there are a large number of application programs that are designed for specific tasks [3].

4. OPERATING SYSTEMS

Operating systems are used to connect computer users and computer hardware, and their task is to ensure the execution of application software on the computer. When choosing the operating system to be installed on workstations and servers, it is necessary to analyze each requirement of the application software and the essential services of the computer network.


On the operating system market there are three generally accepted families of operating systems for which adequate application software exists:
- the Microsoft Windows family of operating systems,
- a large number of Linux distributions, and
- the Apple family of operating systems.
The MGI based its technological line for the production of DTM25 on the Microsoft Windows family of operating systems, i.e. Microsoft Windows XP, Microsoft Windows 7 and Microsoft Windows Server 2008 R2. However, as some phases of the DTM25 process require the use of specialized software (imagesetters, various network services), the MGI also uses operating systems from other manufacturers, but to a much lesser extent.

5. APPLICATION PROGRAMS
The choice of the application programs to be used in the production of DTM25, which closes the entire technological framework into one unit, is one of the most important tasks in the process of establishing the technological framework for the production of DTM25. On the market of application software designed for GIS, image data processing, map production and geospatial databases there is a large offer, because there are many manufacturers and many software packages with various features and levels of complexity.
The MGI tested various software platforms intended for the production of maps, examining their features, complexity of use, reliability, technical support and the directions of further development of each software platform. After the completion of testing and pilot projects, the MGI chose the software platform of the company ESRI, which leads to a completely new approach to the process of creating geospatial databases. This software platform consists of:
- ArcGIS Server - the server platform that enables the entry, processing and reading of data from different databases, the publication of Image and Map Services, the migration of data from different database systems, etc.;
- ArcGIS Editor - the client software that allows the collection, processing and review of data in the process of DTM25 production;
- ArcGIS Info - an enhanced version of the client software ArcGIS Editor, which contains numerous extensions enabling advanced processing of geo data;
- ArcGIS Production Mapping - dedicated client software intended for cartographic publishing; and
- ArcGIS ArcPad - software designed for field data acquisition.
A particular problem was the choice of the software for 3D restitution, because the requirement was that the data generated with it be compatible with the data created with the previously selected software and suitable for work in the CGTBP production. After testing different types of software for 3D restitution, the MGI opted for the software Stereo Analyst for ArcGIS of the company ERDAS. This software is an extension of the ESRI software ArcGIS Editor, which improves it so that it can collect, process and review 3D data.

Picture 2. Example: Production of DTM25 in an ArcGIS environment supported by Informational System components

6. CONCLUSION
The development of information and telecommunication technology has greatly changed the way topographic maps are made, which inevitably led to the evolution of the process of making TM25 in our country. The presented framework of the DTM25 processing technology has multiple advantages over the traditional way of making maps [4]. Emphasis is placed on the unification of software and on hardware standardization, in order to achieve better performance of the manufacturing process. Among the most important benefits are: faster development of the map, a reduced possibility of human error in generating data and their graphical representation, reduced cost of production, improved quality of the maps, a reduced possibility of arbitrariness and inconsistency in operation, and easier and faster adaptation to customer requirements. The application of new technological solutions in the DTM25 process also enables its publication, both in the classic printed form and as databases and web services, which allows on-line use and updating of the maps.
Because of the rapid technological development, the framework is not a final and unchangeable solution, but requires permanent monitoring, analysis and implementation of new technological solutions in the future.


In the end, it should be noted that, even though the new technical solutions, software and hardware platforms have significantly changed the mapping technology in all phases of the work, the classic cartographic principles are preserved, especially in terms of the mathematical elements of the map.

References
Banković, R., Stojanović, Ž.: Principi rešavanja simbologije na digitalnoj TK25, Oteh, Beograd, 2007, pp. 722-727.
VGI-b: Uputstvo za 3D restituciju, VGI-Interni document, Beograd, 2014.
VGI-c: Uputstvo za 2D restituciju, VGI-Interni document, Beograd, 2014.
Tatomirović, S., Banković, R., Marković, V.: Tehnološki proces izrade digitalne topografske karte 1:25000, Oteh, Beograd, 2009, pp. 748-751.
http://www.esri.com
http://www.leica-geosystems.com

SECTION VII

Materials, Technologies and CBRN Protection

CHAIRMAN
Maja Vitorović, PhD
Vencislav Grabulov, PhD
Zijah Burzić, PhD
Ljubica Radović, PhD

THERMAL STABILITY AND MAGNETIC PROPERTIES OF ε-Fe2O3 POLYMORPH

VIOLETA N. NIKOLIĆ
Institute of Nuclear Sciences Vinča, University of Belgrade, Mike Petrovića Alasa 12-14, P.O. Box 522, 11001 Belgrade, violeta@vinca.rs
MARIN TADIĆ
Institute of Nuclear Sciences Vinča, University of Belgrade, Mike Petrovića Alasa 12-14, P.O. Box 522, 11001 Belgrade, marint@vinca.rs
VOJISLAV SPASOJEVIĆ
Institute of Nuclear Sciences Vinča, University of Belgrade, Mike Petrovića Alasa 12-14, P.O. Box 522, 11001 Belgrade, vojas@vinca.rs
Abstract: Based on its high room-temperature coercivity (Hc ≈ 20 kOe), high Curie temperature (Tc = 510 K) and magnetoelectric character, ε-Fe2O3 has become a material of great interest for potential application in spintronic devices. This material can be used for the construction of spintronic microwave diodes (SMD), an important part of a spintronic sensor system for the detection and analysis of radar threats to ground combat vehicles. In this study, two samples containing the ε-Fe2O3 phase (11.6 and 14.1 wt% of Fe2O3) were prepared by a sol-gel synthesis route. The synthesized materials were characterized by XRD and FTIR techniques. The magnetic investigation was done by a SQUID magnetometer. AC and DC measurements, performed in order to investigate the ε-Fe2O3 magnetic behavior, are discussed.
Keywords: ε-Fe2O3, sol-gel synthesis, characterization.

1. INTRODUCTION
Nanosized ε-Fe2O3 oxide is known as a promising material for the fabrication of spintronic devices. Spintronic devices have wide application in many different fields [1-4]. Appropriate materials for the production of spintronics need to exhibit a high coercivity (Hc), a high Curie temperature (Tc), ferromagnetic ordering at room temperature and a magnetoelectric effect. The ε-Fe2O3 phase shows superior properties in comparison to other iron oxides (Hc ~ 2 T, Tc = 510 K), as well as in comparison with other materials used for this purpose: BiFeO3, BiMnO3 [5], Cr2O3 [6], GaFeO3 [7]. For these reasons, the ε-Fe2O3 polymorph is recognized as a potential material for the construction of a spintronic sensor system. Such a system is an important part of radar detectors [8]. The sensor system allows the collection of signals, detected by the giant magnetoresistive (GMR) effect [9].

From the other side, the implementation of the epsilon phase in devices was challenged for many years by the absence of ε-Fe2O3 prepared in the form of thin films. Although this challenge has been overcome, ε-Fe2O3 in the form of an epitaxially stabilized film on an appropriate substrate exhibited a coercivity of 0.8 T, which is much lower than the Hc value characteristic of the ε-Fe2O3 phase prepared in the form of nanoparticles [10]. This fact points to an insufficiently investigated correlation between the synthesis parameters and the magnetic properties of the ε-Fe2O3 phase. Precise control of its magnetic properties is hindered by insufficient current knowledge, and it is difficult to understand the influence of the synthesis parameters on the magnetic behavior of the ε-Fe2O3 phase. To get a deeper insight into this problem, two samples were synthesized by a modified self-catalyzed sol-gel synthesis route [11]. The prepared samples contained close weight percentages of Fe2O3, differing by less than 3 wt%. With the intention of comparing the behavior of the examined samples, further characterization was performed.
X-ray diffraction (XRD) measurements were conducted on a Rigaku RINT-TTRIII diffractometer using CuKα radiation (λ = 1.5418 Å). Diffraction patterns were recorded over the 2θ range 10-70°, with a scanning rate of 0.02 °/min. The FindIt program was used for data analysis. Fourier transform infrared (FTIR) spectroscopy was performed on a Nicolet iS50 FTIR spectrometer in the wavenumber range 4000-400 cm-1. The sampling technique used to collect the spectra was attenuated total reflectance (ATR). Thermal analysis (TA) measurements were performed to clarify the mechanism of the formation and transformation of the ε-Fe2O3 phase. The TA investigation was done on a TA-SDT 2090 thermal analyzer in the temperature interval 30-1100 °C, with a heating rate of 20 °C/min up to 1000 °C; in the temperature interval from 1000 to 1050 °C the sample was heated at a rate of 10 °C/min. Magnetic measurements were done with a Quantum Design MPMS 5XL superconducting quantum interference device (SQUID) magnetometer. Hysteresis loops were measured at 200 K. Alternating current (AC) measurements were performed at the applied frequencies 1 Hz, 5 Hz, 10 Hz, 12 Hz, 24 Hz, 96 Hz, 120 Hz, 246 Hz, 481 Hz and 957 Hz.

2. EXPERIMENTAL RESULTS AND DISCUSSION

2.1. Synthesis
Two samples were prepared according to the modified sol-gel procedure described elsewhere [11]. This is the first time that Fe(NO3)3·9H2O was used as the precursor of Fe3+ ions in order to synthesize the ε-Fe2O3 phase. The catalyst solution containing the iron ions was prepared by dissolving 2 g of Fe(NO3)3·9H2O in 6.67 ml of H2O. The alkoxide solution contained 1.1143 ml TEOS, 0.54 ml H2O and 1.75 ml C2H5OH. After mixing the alkoxide and catalyst solutions, the content of the glass was stirred for 5 h at room temperature to obtain a gel. After gelation (30 days), the gel was dried for 19 h at 80 °C. The synthesis of the other sample was done by the same procedure, only 2.5 g of Fe(NO3)3·9H2O was used as the initial precursor. The samples consisted of 11.6 wt% Fe2O3 and 14.1 wt% Fe2O3, respectively. The first sample is denoted S11.6% in the paper, while the second sample is named S14.1%.

2.2. XRD investigation
Structural characterization of the investigated samples was performed by the XRD technique. The XRD patterns confirmed the similar structure of the samples S14.1% and S11.6%. The diffraction patterns are presented in Fig. 1a) and b), respectively. At lower 2θ angles, Fig. 1a) shows the contribution of the amorphous SiO2 matrix [11].

Figure 1. a) XRD pattern of the sample S11.6%, annealed 3 h at 1050 °C.
Figure 1. b) XRD pattern of the sample S14.1%, annealed 3 h at 1050 °C.

The patterns of the samples subjected to the same annealing conditions exhibited strong Bragg peaks at 2θ angles characteristic of the α-Fe2O3 phase (JCPDS 72-469, rhombohedral crystal structure, space group R-3c). Reflections of another phase were observed at positions corresponding to the ε-Fe2O3 phase (JCPDS 16-653, orthorhombic structure, space group Pna21). The narrow, high peaks of the α-Fe2O3 phase point to the very good crystallinity of the hematite particles, present in excess, while the weak intensity and relatively broad width of the remaining peaks indicate the formation of smaller ε-Fe2O3 nanoparticles. The mean crystallite sizes of the α-Fe2O3 and ε-Fe2O3 nanoparticles were estimated according to the Scherrer equation, using the RIGAKU PDXL powder diffraction analysis software (the curves were fitted to a split pseudo-Voigt peak shape), and are presented in Table 1.

Table 1. Mean crystallite sizes of the α-Fe2O3 and ε-Fe2O3 phases.
Fe2O3 phase   Wt% Fe2O3   2θ [°]    dc [nm]
α-Fe2O3       14.1        33.114    36
ε-Fe2O3       14.1        36.555    19.6
α-Fe2O3       11.6        33.020    33.4
ε-Fe2O3       11.6        36.436    24.3

Sample S11.6% exhibited more pronounced diffraction maxima attributed to the epsilon phase, indicating a higher amount of the epsilon polymorph in comparison to sample S14.1%. Popovici et al. reported that the quantity of the epsilon phase rose with a decrease of the wt% of Fe2O3 [11]. The observed diffraction patterns are in accordance with the literature data.

2.3. FTIR analysis
FTIR spectroscopy is a less informative technique for detecting and distinguishing Fe2O3 and SiO2 particles, as the positions of the SiO2 bands overlap with the positions that could be ascribed to the Fe2O3 phase. Nevertheless, FTIR measurements provide information about the surface of the investigated samples. Fig. 2 presents the FTIR spectra of the samples annealed at 1050 °C for 3 h. Similarities in composition between the investigated samples are evident in the spectra. The pronounced IR peak at 1083 cm-1, as well as a weaker peak at 792 cm-1, is attributed to Si-O bond vibrations, pointing to the presence of the SiO2 matrix [14]. On the other side, the sharp minima at 535 cm-1 and 440 cm-1 result from the Fe-O vibrations characteristic of the α-Fe2O3 phase [15]. The results of the FTIR analysis are in accordance with the XRD investigation (Figure 1), although the FTIR technique is not an appropriate tool for distinguishing iron oxide phases.

Figure 2. FTIR spectra of the samples (S11.6% black line, S14.1% red line), annealed at 1050 °C for 3 h.
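The crystallite sizes in Table 1 follow from the Scherrer equation, d = Kλ/(β cos θ). A minimal sketch of that estimate is given below; the peak width and shape factor are illustrative values, not the measured ones from this work.

import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15418, K=0.9):
    """Mean crystallite size d = K*lambda / (beta*cos(theta)), with beta in radians."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# e.g. a reflection at 2theta = 33.1 deg with an assumed FWHM of 0.25 deg:
print(scherrer_size(33.114, 0.25), "nm")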

2.4. TA analysis
The thermogravimetric (TG) and differential thermal analysis (DTA) curves of the investigated samples S14.1% and S11.6% are depicted in Fig. 3 and Fig. 4, respectively. The TG analysis revealed that both samples exhibited sharp mass losses during the first 300 °C of the annealing process (Table 2). The experimental weight loss amounted to around 60 wt% of the initial mass of the samples. Afterwards the curves reached a plateau. Further annealing (up to 1050 °C) resulted in much lower, but observable, mass losses (Table 2).

Figure 3. TA curves of the sample S14.1%.
Figure 4. TA curves of the sample S11.6%.

Table 2. The mass losses in the temperature range 30-1050 °C.
Wt% Fe2O3   Mass losses / %   Temperature range / °C
14.1        59.0              30-330
14.1        3.0               330-1050
11.6        64.6              30-330
11.6        2.0               330-1050

To determine whether the mass losses were accompanied by an endothermic or an exothermic process, the DTA curves were recorded. The DTA curve of the sample S11.6% exhibited better pronounced peaks than the DTA curve of the sample S14.1% (Fig. 3 and Fig. 4). This is explained by the different wt% of Fe2O3 in the samples, as the wt% of Fe2O3 determines the ratio of the ε-Fe2O3 and α-Fe2O3 phases [11]. Table 3 presents the temperatures at which the different processes were recorded by the DTA measurements.

Table 3. Results of the DTA analysis of the samples. The curves were recorded in the temperature range 30-1050 °C.
Wt% Fe2O3   Transformation temperature Tt / °C   Type of process at Tt
14.1        170                                  Endothermic
14.1        578                                  Endothermic
11.6        93                                   Endothermic
11.6        150                                  Endothermic
11.6        210                                  Endothermic
11.6        578                                  Endothermic
11.6        1050                                 Exothermic

The pronounced minima at lower temperatures (30-330 °C) are ascribed, in both investigated samples, to the elimination of chemically or physically adsorbed H2O and CO from the matrix [16]. Sample S11.6% exhibited a minimum at 210 °C; this minimum arises as a consequence of the first stage of the thermal decomposition of the nitrate salt into NO gases [17]. The processes observed at higher temperatures are due to the thermal transformation of the SiO2 matrix. The endothermic process observed at 578 °C is characteristic of both samples and is ascribed to the evaporation of the solvent placed in the pores of the silica matrix, which initiates the densification of the silica [18]. All the mentioned processes are accompanied by mass losses (Table 2).
At 1050 °C an exothermic process started, attributed to the progression of the SiO2 matrix towards a glassy structure [19]. Although the transformation of amorphous SiO2 to crystalline SiO2 begins at this temperature, the process had not finished even after 3 h of annealing (XRD analysis, Fig. 1a). Nevertheless, the transformation of the SiO2 allowed the diffusion of different gasses through the matrix. The result of the gas diffusion is the ε-Fe2O3 → α-Fe2O3 conversion. This process is masked by the SiO2 matrix modification, although it occurs simultaneously.
To establish whether the observed process is accompanied by zero mass loss, the isothermal thermogravimetric method was applied. The measurement was performed at 1050 °C and the sample mass loss was measured as a function of time (Fig. 5).

Figure 5. TG curve of the sample consisting of 11.6 wt% Fe2O3, recorded at 1050 °C for 90 minutes.

Fig. 5 reveals that the sample lost 3.7 wt% of its initial mass during the 90-minute treatment.
The ε-Fe2O3 and α-Fe2O3 phases exhibit different structures, whose main characteristics (number of formula units (Fe2O3) in the unit cell, volume of the unit cell, structure and space group) are shown in Table 4. According to their structural characteristics, the observed mass losses point to the influence of the SiO2 matrix on the ε-Fe2O3 → α-Fe2O3 phase transformation, which arises as a consequence of the altering of the anion positions at high temperatures: from oxygen nonstoichiometry

(characteristic of the epsilon structure) to the stoichiometric anion positions present in the structure of the α-Fe2O3 polymorph. The alteration of the anion positions is caused by the diffusion of the released gasses through the SiO2 matrix, which constantly affects the oxygen nonstoichiometry [20].

Table 4. Differences in the structure of the ε-Fe2O3 and α-Fe2O3 phases [21].
Fe2O3 oxide   Formula units number   Unit cell volume / pm3   Unit cell structure   Space group
ε-Fe2O3       8                      5.29·10^6                orthorhombic          Pna21
α-Fe2O3       6                      5.04·10^6                rhombohedral          R-3c

2.5. Magnetic measurements
Magnetic measurements were performed to probe the magnetic behavior of the ε-Fe2O3 phase. The M(H) curves of the samples S11.6% and S14.1%, annealed at 1050 °C for 3 h, are presented in Fig. 6a) and b), respectively. The normalized magnetization is presented as a function of the magnetic field.

Figure 6. a) Hysteresis of the sample S11.6%, annealed 3 h at 1050 °C.
Figure 6. b) Hysteresis of the sample S14.1%, annealed 3 h at 1050 °C.

The hysteresis of the sample presented in Fig. 6a) exhibited a coercivity of 1.43 T, which undoubtedly confirmed the appearance of the ε-Fe2O3 phase in the investigated sample, while the hysteresis loop of the other sample, Fig. 6b), exhibited a coercivity of 0.65 T. In the literature, a pronounced decrease of the coercivity value characteristic of the epsilon phase is reported for samples containing more than 35 wt% Fe2O3 [11]. In our case, the alteration of the weight percentage in the range of 11-15 wt% Fe2O3 resulted in a coercivity decrease of 0.78 T. Although the ε-Fe2O3 nanoparticles exhibited high crystallinity (Fig. 1), the lowering of the coercivity arises due to:
- the formation of huge agglomerates, and
- the use of the hydrated iron nitrate precursor (the excess of water in the synthesis influenced the distribution of the particles and the structure of the SiO2 matrix).
To confirm the presence of the ε-Fe2O3 phase in the sample S14.1%, AC measurements were performed. The temperature dependence of the real and imaginary parts of the susceptibility is depicted in Fig. 7a) and b). Both components are temperature dependent. An observed sharp anomaly, characteristic of the epsilon phase, occurred around ~100 K.

Figure 7. a) Normalized χ'(T) curves of the sample S14.1%, annealed 3 h at 1050 °C.
Figure 7. b) χ''(T) curves of the sample S14.1%, annealed 3 h at 1050 °C.

To explain the behavior of the anomaly peak, the shift of the frequency (f)-dependent a.c. susceptibility was investigated. Kurmoo et al. [22] observed a shift of the anomaly peak to lower temperatures with increasing f-value, while the Brazda group reported a shift of the a.c. anomaly peak to higher temperatures with a frequency increase [23]. In our case, a shift of the anomaly peak to lower temperatures with the rise of the frequency is observed, Fig. 7. Figs. 7a) and b) present the normalized temperature dependence of the susceptibility, depicting the behavior of the anomaly peak.
From the other side, the additional, very weak shoulder below 50 K indicates the presence of another phase, γ-Fe2O3, and points to a superparamagnetic (SPM) contribution. During the annealing process at high temperatures, the γ-Fe2O3 → ε-Fe2O3 phase transformation occurred, and the observed dynamics confirm the formation of a certain amount of superparamagnetic nanometric γ-Fe2O3 phase. The results of the AC magnetic measurements are in accordance with the XRD analysis.


3. CONCLUSION
The present study provides a modified method of sol-gel synthesis. Two samples, containing 11.6 wt% and 14.1 wt% Fe2O3, were prepared. XRD analysis confirmed the presence of nm-sized α-Fe2O3 and ε-Fe2O3 phases in the samples obtained after 3 h of annealing at 1050 °C. Isothermal TG measurements revealed experimental evidence for the exothermic process observed at 1050 °C during the first 90 minutes of the annealing process. This process could be ascribed to the modification of the SiO2 matrix, which, circumstantially, resulted in the ε-Fe2O3 → α-Fe2O3 phase transformation. The phase transformation is attributed to the alteration of the oxygen nonstoichiometry, which arose as a consequence of the transfer of the released gasses through the SiO2 matrix. The samples annealed for 3 h at 1050 °C exhibited coercive fields of 1.43 T and 0.65 T, depending on the wt% of Fe2O3. An alteration of the weight percentage by less than 3 wt% resulted in a decrease of the coercivity of 0.78 T. The AC magnetic measurements showed an anomaly peak around 100 K. The peak shifted to lower temperatures with frequency, pointing to the softening of the material, while an additional peak centered around 50 K denotes the presence of the γ-Fe2O3 phase. Although the influence of the synthesis parameters on the magnetism of the ε-Fe2O3 phase is still an open question, this study improves the current knowledge in this area. Further efforts will be directed towards advancing the control of the magnetic properties of the epsilon phase, with the aim of improving the implementation of the ε-Fe2O3 phase in commercial products.
Acknowledgements
We thank Dr Mirjana Milić and Dr Slađana Novaković for useful discussions. This work has been supported by the Ministry of Education, Science and Technological Development, Republic of Serbia (Project No. III 45015).
References
[1] Rife, J.C., Miller, M.M., Sheehan, P.E., Tamanaha, C.R., Tondra, M., Whitman, L.J.: Design and Performance of GMR Sensors for the Detection of Magnetic Microbeads in Biosensors, Sensors and Actuators A: Physical, 107(3) (1995) 209-303.
[2] http://www.faculty.rsu.edu/users/c/clayton/www/carman/paper.html
[3] Wolf, S.A., Chtchelkanova, A.Y., Treger, D.M.: Spintronics - A Retrospective and Perspective, IBM Journal of Research and Development, 50(1) (2006) 101-110.
[4] U.S. Missile Defense Agency (Washington, DC: Department of Defense), 2004 Technology Applications Report, (2005) 44-45.
[5] Sharan, A., Ilsin, A., Chen, C., Collins, R., Lettieri, J., Jia, Y., Schlom, D., Gopalan, V.: Large optical nonlinearities in BiMnO3 thin films, Appl. Phys. Lett., 25(83) (2003) 5169-5174.
[6] Borisov, P., Hochstrat, A., Chen, X., Kleemann, W., Binek, C.: Magnetoelectric switching of exchange bias, Phys. Rev. Lett., 94(11) (2005) 1-4.
[7] Rado, G.T.: Observation and Possible Mechanisms of Magnetoelectric Effects in a Ferromagnet, Phys. Rev. Lett., 10(13) (1964) 335-337.
[8] http://www.army.mil/article/144834/Scientists_Develop_Novel__Spintronic__Sensors_for_the_Army/
[9] Baibich, M.N., Broto, J.M., Fert, A., Nguyen Van Dau, F., Petroff, F., Etienne, P., Creuzet, G., Friederich, A., Chazelas, J.: Giant Magnetoresistance of (001)Fe/(001)Cr Magnetic Superlattices, Phys. Rev. Lett., 21(61) (1988) 2472-2475.
[10] Gich, M., Gazquez, J., Roig, A., Crespi, A., Fontcuberta, J., Idrobo, J.C., Pennycook, S.J., Varela, M., Skumryev, V.: Epitaxial stabilization of ε-Fe2O3 (001) thin films on SrTiO3 (111), Appl. Phys. Lett., 11(96) (2010) 112508.
[11] Popovici, M., Gich, M., Niznansky, D., Roig, A., Savii, C., Casas, L., Molins, E., Enache, C., Sort, J., de Brion, S., Chouteau, G.: Optimized Synthesis of the Elusive ε-Fe2O3 Phase via Sol-Gel Chemistry, Chem. Mater., 25(16) (2004) 5544-5548.
[12] Tomita, K., Hashimoto, K., Yahuro, H., Okhoshi, S.: Preparation of the Nanowire Form of ε-Fe2O3 Single Crystal and a Study of the Formation Process, J. Phys. Chem. C, 112(51) (2008) 20212-20216.
[13] Gich, M., Gazquez, J., Roig, A., Crespi, A., Fontcuberta, J., Idrobo, J.C., Pennycook, S.J., Varela, M., Skumryev, V.: Epitaxial stabilization of ε-Fe2O3 (001) thin films on SrTiO3 (111), Appl. Phys. Lett., 11(96) (2010) 112508-112513.
[14] Rubio, F., Rubio, J., Oteo, J.L.: A FT-IR Study of the Hydrolysis of Tetraethylorthosilicate (TEOS), Spectroscopy Letters, 31(1) (1998) 199-219.
[15] Zhao, B., Wang, Y., Guo, H., Wang, J., He, Y., Jiao, Z., Wu, M.: Iron oxide(III) nanoparticles fabricated by electron beam irradiation method, Materials Science-Poland, 4(25) (2007) 1145-1149.
[16] Ye, X., Lin, D., Jiao, Z., Zhang, L.: The thermal stability of nanocrystalline maghemite Fe2O3, J. Phys. D: Appl. Phys., 4(31) (1998) 2739-2744.
[17] Pacewska, B., Keshr, M.: Thermal transformations of aluminium nitrate hydrate, Thermochimica Acta, 12(385) (2002) 73-80.
[18] Gich, M., Barick, K.C., Varaprasad, B.S.D.Ch.S., Bahadur, D.: Structural and magnetic properties of ε- and γ-Fe2O3 nanoparticles dispersed in silica matrix, J. Non-Cryst. Solids, 2(356) (2010) 153-159.
[19] Černošek, Z., Holubová, J., Černošková, E., Liška, M.: Enthalpic relaxation and the glass transition, J. of Optoelectronics and Advanced Materials, 3(4) (2002) 489-503.
[20] Machala, L., Tuček, J., Zbořil, R.: Polymorphous Transformations of Nanometric Iron(III) Oxide: A Review, Chem. Mater., 14(23) (2011) 3255-3272.
[21] Tronc, E., Chanéac, C., Jolivet, J.P.: Structural and Magnetic Characterization of ε-Fe2O3, J. Solid State Chem., 1(139) (1998) 93-104.
[22] Kurmoo, M., Rehspringer, J.L., Hutlova, A., D'Orleans, C., Vilminot, S., Estournes, C., Niznansky, D.: Formation of nanoparticles of ε-Fe2O3 from yttrium iron garnet in a silica matrix: an unusually hard magnet with a Morin-like transition below 150 K, Chem. Mater., 5(17) (2005) 1106-1114.
[23] Brázda, P., Niznansky, D., Rehspringer, J.L., Poltierová-Vejpravová, J.: Novel sol-gel method for preparation of high concentration ε-Fe2O3/SiO2 nanocomposite, J. Sol-Gel Sci. Technol., 2(51) (2009) 78-83.


TECHNOLOGY FOR COMBATING BIOTERRORISM


ELIZABETA RISTANOVIC
Military Medical Academy, University of Defence, Belgrade, elizabet@eunet.rs

Abstract: The use of microorganisms and toxins derived from living organisms as biological warfare agents or weapons of terrorism is a real and present threat in the security architecture of the modern world. BW agents are unique and different from all other weapon systems. The broad spectrum of possible psychophysical and environmental consequences that BW could provoke, and their further implications, make them strategic threats. The rapid development of science and technology (biotechnology, nanotechnology, genetic engineering) and the potential misuse of their achievements make the situation more serious. In this paper we discuss how technological development can affect bioterrorism and biodefense with regard to the identity of the agents; the equipment necessary for their production, containment, purification, stabilization and weaponization; the dissemination of BW agents; as well as the equipment for detection, warning and rapid and specific molecular identification of biological agents, and the individual and collective biological defense systems, including decontamination measures, protective masks and suits, immunization measures, antibiotic prophylaxis, etc.
Keywords: biological warfare (BW), bioterrorism, technology, biodefence.

1. INTRODUCTION
Biological weapon (BW) agents are naturally occurring or genetically modified self-replicating microorganisms (bacteria, viruses, fungi, parasites) or toxins (derived from microorganisms, plants or animals) that can cause disease and death in a target population or attack the food supply system. The characteristics of microorganisms, i.e. the small amount necessary to cause infection, their self-reproducibility (except for toxins), wide availability, the relatively low cost required for production, the ease of deployment and the possible consequences they could provoke, make these agents different from all other weapon systems. BW can be treated as a strategic and disorganizing threat. All of the equipment used to produce biological agents is dual use, so that biological agents can easily be produced and deployed by biomedical, pharmaceutical and food production facilities. [1]

The use of BW agents is as old as human society. It was recorded even before Christ, when the decaying corpses of humans and animals that had died from infectious diseases were placed near food and water supplies. Over the centuries smallpox, plague and other infectious diseases were used against people.
During World War I the causative agents of zoonoses (anthrax, tularemia, glanders) were used against humans and animals, which was very important for military logistics.

Picture 1. Smallpox clinical manifestations during the Yugoslav outbreak in 1972 (left) and its causative agent, the variola virus (right)

During World War II the rapid development of biological programmes started in the USA, the former Soviet Union and other countries. The world was confronted with the first biological warfare, so in 1972 an international agreement was signed - the Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on their Destruction (Biological Weapons Convention, BWC). [2]
Nowadays, bioterrorism also presents a global and serious threat in the security architecture of the modern world. Newly emerging infectious diseases - such as AIDS, prion disorders, hemorrhagic fevers such as Ebola and Dengue, as well as the very recent Zika virus - complicate the problem, fostering the need for their monitoring, prevention and reliable detection and identification using new achievements of science and technology. Fighting against bioterrorism requires the cooperation of the intelligence-security, health and academic sectors and decision makers, as well as intensive international cooperation in this field.

The rapid development of science and technology, especially genetic engineering, biotechnology and nanotechnology, during the past years has rapidly changed the possible impact of BW on people, military forces and communities, as well as the possibilities for combating bioterrorism. The achievements of these technologies can be used for: 1) further development of biological agents, increasing their virulence and stability after deployment; 2) production of pathogenic organisms from non-pathogenic strains, or genetic modification of microbes in order to complicate the detection of a biological agent or to make vaccines and antibiotics ineffective; 3) modifying the immune response system of the target population to increase or decrease its susceptibility to pathogens; 4) advancing the delivery of microorganisms to their targets; 5) protecting personnel against biological agents, especially protecting the respiratory system from aerosols, the primary delivery mechanism; 6) production of vaccines; 7) producing sensors based on the detection of unique signature molecules on the surface of biological agents using monoclonal antibodies, or on the interaction of the genetic material of such organisms with produced gene probes for the polymerase chain reaction (PCR), DNA hybridization, DNA microchips and other procedures that can be used for identification. [3]

Genetic engineering enables the modification of wild-type microorganisms, allowing the incorporation of genes, i.e. specific DNA segments responsible for the production of a toxin or other pathogenicity factors, into non-pathogenic strains. Vice versa, known pathogens or toxins may be genetically inactivated, hindering the effects of vaccines and therapeutics. Cells can also be modified to produce specific antibodies for passive immunization against some infectious agents. Such antibodies could interfere with BW agent detection kits based on the detection of microbial surface antigens, thus disabling specific detection and increasing the agent's effectiveness. The survivability of a pathogen under environmental conditions (temperature, ultraviolet (UV) radiation, drying) can also be genetically improved, promoting its stability during dissemination. The development and insertion of conditional "suicide genes" could program an organism to die off after a predetermined number of replications, enabling the safe reoccupation of the affected area after a predetermined period of time.
Genome sequencing has given rise to a new generation of genetically engineered bioweapons carrying the potential to change the nature of modern warfare and defense. As researchers continue the transition from the era of DNA sequencing to the era of DNA synthesis, it may soon become feasible to synthesize any microorganism whose DNA sequence is known and to create novel pathogens. Genetically engineered pathogens could be made safer to handle, easier to distribute, capable of ethnic specificity, or made to cause higher mortality rates. [5]

Picture 2. Progress in biotechnology applicable for the improvement of bioweapons and biodefence
The development of the biological industry is intensive, and data on the technologies involved in biological production are widely available. These technologies are dual use, with applications in the pharmaceutical, food, cosmetic and pesticide industries as well as for potential military or terrorist purposes. Because of that, these facilities and technologies are subject to international and multilateral control regimes concerning potential BWs and other non-proliferation issues.

Biotechnology may be used to manipulate cellular mechanisms to cause disease, so an agent could be designed to induce cells to multiply uncontrollably or to initiate apoptosis, programmed cell death. In the coming years it may be conceivable to design a pathogen that targets a specific person's genome. [6]

Scientists have been able to transform the four letters of DNA, A (adenine), C (cytosine), G (guanine) and T (thymine), into the ones and zeroes of binary code. This transformation makes genetic engineering a matter of electronic manipulation, which decreases the cost of the technique.

2. THE IMPACT OF TECHNOLOGY ON BIOLOGICAL AGENTS
The awareness of the impact of biological warfare and its possible consequences has risen markedly in the past 15 years, after the anthrax campaign in the US in 2001. The opportunities offered by the new technologies in the production cycle and preparation of potential BW contribute significantly to this threat.

The following phases in BW production, stabilization and dissemination/dispersion, are important issues because of the susceptibility of BW agents to environmental degradation caused by exposure to high physical and chemical stress, or to specific inactivating agents, which can result in the loss of bioactivity. Infectious BW agents are generally stabilized and then spray dried. A toxin agent is most effective when prepared as a freeze-dried powder and encapsulated. The primary means of stabilization for storage or packaging are: initial concentration using vacuum filtration, ultrafiltration, precipitation and centrifugation; direct freeze drying (lyophilization), which is the method of choice for long-term storage of bacterial cultures that can be easily rehydrated and cultured even after 30 years; direct spray drying; formulation into a special stabilizing solid, liquid or sometimes gaseous solution; and deep freezing of

contained products in liquid nitrogen refrigerators (−196 °C) or low-temperature mechanical freezers (−70 °C) with cryoprotective agents.

Microorganisms that are possible BW agents are widely distributed in the natural environment and can also be found in culture collections (e.g. the American Type Culture Collection [ATCC], European collections). Some specific features make microorganisms very attractive as possible BW agents. Thus, approximately 10 grams of anthrax spores can kill as many persons as a ton of sarin. Under appropriate meteorological conditions, a single aircraft can disperse 100 kg of anthrax over a 300 km2 area and theoretically cause 3 million deaths in a population with a density of 10,000 people per km2. The use of genetic engineering and biotechnology can make them even more effective. [4]
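As a simple check of the figures quoted above (a worked example added here for illustration, not part of the estimate in [4]), the exposed population follows directly from the sprayed area and the assumed population density:

\[
300\ \mathrm{km^2} \times 10{,}000\ \mathrm{persons/km^2} = 3 \times 10^{6}\ \mathrm{persons},
\]

so the theoretical figure of 3 million deaths corresponds to assuming that essentially everyone within the covered area receives a lethal dose.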

Sensor systems based on the physical or chemical properties of biological agents include high-performance liquid and gas chromatography, mass spectrometry and ion mobility spectrometry (IMS). Detectors for BW agents must have a short response time (less than 30 minutes) with a low false alarm rate. Detection equipment must be integrated with a command and control system. Early warning is essential to avoid contamination. Agent location, intensity and duration are crucial parameters for command decisions.

Dissemination of BW agents is usually accomplished by aerosol dispersal using either spray devices or explosive devices (cluster bombs, missile warheads with submunitions designed for extended biological agent dispersal), which must be approached with caution since heat-generating explosions can inactivate microorganisms and toxins. The preferred approach is therefore dispersion using a pressurized gas in a submunition, or the use of small rotary-wing vehicles, fixed-wing aircraft fitted with spray tanks, drones, bomblets, cruise missiles and high-speed missiles with bomblet warheads, or fixed-wing aircraft and ground vehicles with aerosol generators. The effective use of BW agents requires analysis of the meteorological conditions and mapping of the target. Terrorists can also disperse BW agents with manual aerosol generators. The use of technology can improve the means of dissemination. [7]

Detection and identification of biological agents allow commanders to take steps to avoid contamination, to determine the appropriate protection for continued operations, and to initiate proper prophylaxis and therapy to minimize casualties and performance degradation. Identification systems are critical to the medical response. [8] No single sensor or biodetection system that currently exists can detect all agents of interest. Systems in the advanced stages of development warn that a biological attack has occurred and collect samples for subsequent laboratory analysis, but real-time, on-site detection systems are not available. The rapid progress in biotechnology can contribute to this field, although it can also be misused for the improvement of BW agents.

Aerosol dispersal allows control of particle size and density to maximize protection from environmental degradation and uptake of agents with a diameter of 1-15 µm into the lungs of the targeted population by inhalation, the primary mode of infection for most BW agents. Some agents can also be dispersed by ingestion of contaminated food or water.

3. THE ROLE OF TECHNOLOGY IN COMBATING BIOTERRORISM
3.1. BW warning, detection and identification systems
Warning, detection and identification are the first line of reaction to a possible biological attack, including the detection of signals by sensors and their transduction to a transponder. Most detection and warning systems are based on the physical or chemical properties of biological agents. Stand-off detectors provide early warning using spectroscopy-based monitoring of materials containing nucleic acids/proteins, with registered absorbance in the wavelength range between 230 and 285 nm; they are not specific for infective agents and their toxins.

Picture 4. Nanotechnology to combat BW agents: a biosensor array for detection of anthrax or plague agents (silicon electrical sensors containing pores ranging from a few nanometers to microns, whose complex impedance responds in seconds to exposure to various targets, each with a specific signature)

3.2. Defense systems and technologies
Standards of protection could vary from minimal to sophisticated. Self-protection defensive measures would be easiest to take in an offensive attack, because the attacker would know in advance what BW would be used and could protect its own personnel, while the attacked country would encounter problems owing to an unknown agent used at an unspecified place for an undetermined duration. In such circumstances, immunization requirements would have to be determined by intelligence reports of enemy capabilities. Some type of detection would be needed to alert forces to take protective measures.

Picture 3. Devices for detection and identification of BW

Point detectors include dipstick kits selective for some potential BW agents, or multiarray sensors with antibodies generated against BW agents or with gene sequences complementary to BW agents.

Individuals can be protected from BW by providing prophylactic treatment before deployment into a risk area, by providing full respiratory protection during potential exposure to the agent, or by using pharmacological, physical or biomedical antidotes to threat agents shortly after exposure. Prophylaxis of the individual is accomplished by active or passive immunization, using the attenuated or dead biological agent, a fragment of the toxin/biological agent, multivalent or DNA vaccines, as well as specific antibodies. The use of biological response modifiers (BRM) such as interferons and interleukins at the appropriate time can also mobilize the immune system in a normal individual, enhancing the response to the given microbial antigen. Immunosuppressants are a class of BRMs that cause subjects to become immunocompromised or more susceptible to infection, which is important in offensive BW. [9]

Biological protection requires only respiratory and eye protection. The protective requirements include resistance to the penetration of BW or toxin materials, filtration of inflow air to remove particles containing the agents, and cooling of the interior compartment. Current clothing and mask systems act as a barrier between the agent and the respiratory system or mucosal tissues of the target; they do not inactivate the agent. [10] For biological protection such clothing is sufficient, but it is not comfortable: the visual field of view is decreased and the head mask causes discomfort because of temperature increase and fogging.

Picture 5. Protective clothing for work with potential BW (first responders)

Protection measures for a unit or group primarily rely on weather monitoring, remote probe monitoring for biological agents, and central command data acquisition, transfer and analysis. Large-scale decontamination measures for barracks, vehicles and other equipment are also considered collective protection. [11]

4. CONCLUSION
Recent achievements and progress in genetic engineering and biotechnology make possible the development of new bioweapons. In addition, the emerging tools of genetic knowledge and biological technology may also be used as a means of defense against BWs. The exponential increase in computational power, combined with the accessibility of genetic information and biological tools to the general public and the lack of governmental regulation, raises concerns about the threat of biowarfare for terrorist purposes.

References
[1] Federation of American Scientists, Introduction to Biological Weapons (2011). Available at http://www.fas.org/programs/bio/bwintro.html (28 December 2012).
[2] Biological and Toxin Weapons Convention (BTWC). Available at: http://www.opbw.org/
[3] Ainscough,M.: Next Generation Bioweapons: Genetic Engineering and Biowarfare (2002). Available at http://www.au.af.mil/au/awc/awcgate/cpcpubs/biostorm/ainscough.pdf
[4] Inglesby,T.V., Henderson,D.A., Bartlett,J.G., et al. (1999): Anthrax as a biological weapon, JAMA, 281:1735-1745.
[5] Kay,D. (2003): Genetically Engineered Bioweapons. Available at http://www.aaas.org/spp/yearbook/2003/ch17.pdf
[6] Hessel,A., Goodman,M., Kotler,S.: Hacking the President's DNA, The Atlantic, 2012.
[7] Ristanovic,E., Jevtic,M. (2010): Dual-Use Goods in the Production of Biological Weapons, JMedCBR, 8:304-309.
[8] Ristanovic,E.: Bioterrorism: prevention and response, Media Center Odbrana, University of Defence, 2015.
[9] Parrish,A.R., Oliver,S., Jenkins,D., Ruscio,B., Green,J.B., Colenda,C.A. (2005): Short medical school course on responding to bioterrorism and other disasters, Academic Medicine, 80(9): 820-823.
[10] Jovasevic-Stojanovic,M., Stojanovic,B., Ristanovic,E.: Critical Issues on Efficiency of Personal Protection against CBR Agents of Nanoparticle Dimensions, Proceedings of the Sixth International Chemical and Biological Medical Treatment Symposium, Spiez, Switzerland, 2006; CD.
[11] Kortepeter,M., Christopher,G., Cieslak,T., et al.: Medical Management of Biological Casualties Handbook, 4th ed., Fort Detrick: United States Army Medical Research Institute of Infectious Diseases (USAMRIID), 2001.

ESTIMATION OF SAFT AND PC-SAFT EOS PARAMETERS FOR N-HEPTANE UNDER HIGH PRESSURE CONDITIONS
JOVANA ILIĆ
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Belgrade, Serbia, jilic@tmf.bg.ac.rs
MIRKO STIJEPOVIĆ
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Belgrade, Serbia
ALEKSANDAR GRUJIĆ
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Belgrade, Serbia
JASNA STAJIĆ-TROŠIĆ
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Belgrade, Serbia
GORICA IVANIŠ
Faculty of Technology and Metallurgy, University of Belgrade, Belgrade, Serbia
MIRJANA KIJEVČANIN
Faculty of Technology and Metallurgy, University of Belgrade, Belgrade, Serbia

Abstract: Thermodynamic models for the calculation of thermodynamic properties at different operating conditions are of great importance for the process industry. In this work, the SAFT and PC-SAFT equations of state (EOS) were employed to estimate n-heptane densities under high pressure conditions. New sets of parameters for the SAFT and PC-SAFT EOS are estimated using densities of n-heptane measured over extensive ranges of temperature and pressure (288.15-413.15 K and 0.1-60 MPa, respectively). Comparing the calculated densities with the selected literature data, an absolute average percentage deviation (AAD) of 0.155 % and 0.075 %, a percentage maximum deviation (MD) of 1.389 % and 0.324 %, an average percentage deviation (Bias) of -0.001 % and 0, and a standard deviation (σ) of 0.001 kg m-3 have been obtained for SAFT and PC-SAFT, respectively.
Keywords: parameter estimation, n-heptane, density, SAFT, PC-SAFT.
1. INTRODUCTION
Thermodynamic models play an essential role in diverse industries, in the design of safe and efficient processes and equipment [1]. From the practical point of view, in the chemical, polymer and pharmaceutical industries the most used models are based on equations of state (EoS) [2]. A family of classical high-pressure models is the cubic EoS, and the most well-known of them are the van der Waals, Redlich-Kwong, Soave-Redlich-Kwong and Peng-Robinson equations [3]. In recent years, two models have attracted significant interest from engineers: the Statistical Associating Fluid Theory (SAFT) and the Perturbed Chain SAFT (PC-SAFT) [2]. These models are used commercially for long-chain molecular fluids and mixtures of components with different size and polarity, in order to obtain more accurate thermodynamic properties [4].

The Statistical Associating Fluid Theory (SAFT), a theoretically derived model, is one of the most important association theories based on perturbation approaches [5]. The equations based on Wertheim's perturbation theory [6,7] were implemented into a useful form and proposed by Chapman and co-workers [8]. It represents a very successful method for predicting the phase behavior of long chain molecules, both associating and non-associating [7]. Huang and Radosz first suggested a very useful and successful modification of SAFT, applying the dispersion term obtained by Chen and Kreglewski [9-11]. In the past twenty years the most used version of SAFT, PC-SAFT, has been developed by Gross and Sadowski [10]. Several varieties of SAFT EoS have been developed, and their differences derive from the different dispersion terms. Different versions of the described model have been proposed, including the original SAFT [3,8], the simplified SAFT [3,12], the CK-SAFT [3,9], the LJ-SAFT [3,15,16], the soft-SAFT, the SAFT-VR [1,15,16] and the PC-SAFT [1,3,10,17]. PC-SAFT is the most widely used version.

The scope of this work was to estimate SAFT and PC-SAFT parameters by fitting calculated densities to experimentally obtained values. To our knowledge there have been many studies involving n-heptane, but in mixtures with other components. The experimental values of densities for the selected component have been evaluated at different operating conditions (0.031-0.26 MPa) [9,10,17].


2. THEORETICAL PART

The Helmholtz free energy represents the starting point for calculating the thermophysical properties of interest [10]. The residual Helmholtz energy is expressed by the following equation [3,5]:

a^{res} = a^{hs} + a^{disp} + a^{chain} + a^{assoc}    (1)

where a^{hs} denotes the hard-sphere reference term and a^{disp} the dispersion term, while a^{chain} and a^{assoc} denote the contributions from chain formation and from association, respectively [3,5]. The chain and association terms are essentially unchanged in almost all SAFT EoS variants [3]. In our case the association contribution is not important and can be neglected, because the selected component is non-associating. The segment term differs between SAFT variants, as for example in PC-SAFT [5]. The elementary difference between these models is the dispersion term, where hard-sphere chains are formed first and a chain dispersion term is then added [3].

The residual Helmholtz energy is a function of the model parameters,

a^{res} = f(\rho, m_i, v^{00}, u^{0}/k)  for SAFT
a^{res} = f(\rho, m_i, \sigma_i, \varepsilon_i/k)  for PC-SAFT

where k is the Boltzmann constant. The Helmholtz free energy is thus parameter dependent. It is defined by three parameters: the chain length or number of segments (m_i), the segment diameter (\sigma) and the segment energy parameter (\varepsilon) [5]. For the calculation it is therefore necessary to know the segment number (m_i), the segment volume (v^{00}) and the segment energy (u^{0}/k) for SAFT, and the segment number (m_i), the segment diameter (\sigma) and the segment energy (\varepsilon/k) for PC-SAFT. These values have been estimated in this paper, and the density values are fitted on that basis.

3. PARAMETER ESTIMATION

There are diverse unconstrained optimization methods that can be applied for parameter estimation [18]. Iterative methods for nonlinear optimization such as the least squares trust region (LSQR) and sequential quadratic programming (SQP) are used to minimize the objective function. The Levenberg-Marquardt algorithm, known as the damped least squares method, is a standard technique commonly used to solve nonlinear least squares problems. Unfortunately, this optimization technique finds only a local minimum and does not guarantee that the global minimum has been found [18]. In this work the least squares trust region optimization method has been applied.

In parameter estimation, the optimization problem is defined so as to find the values of the parameter vector that minimize the objective function [19]:

S(k) = \sum_{i=1}^{N} e_i^{T} Q_i e_i    (2)

where k = [k_1, k_2, \ldots, k_p]^{T} is the p-dimensional vector of parameters, e = [e_1, e_2, \ldots, e_m]^{T} is the m-dimensional vector of residuals with e_i = \rho_i^{lit} - \rho_i^{cal}, and Q_i is a weighting matrix. The obtained function S(k) is transformed into polynomial form and expressed by the equation

S(k) = \sum_{i=1}^{N} \left( \rho_i^{lit} - \rho_i^{cal} \right)^{2}    (3)

In this investigation, the initial guesses for the parameters, k^{(0)}, are taken from the literature, for SAFT [9] and PC-SAFT [10]. Parameter estimation is carried out from density data according to the literature [20]. The objective function is also established by combining equation (1) and the standard thermodynamic relation [21]:

P + \left( \frac{\partial a}{\partial V} \right)_{T} = 0    (4)

In order to obtain new values of the densities, equation (4) is solved for \rho (V = 1/\rho) and the results are compared with the initially assumed densities.

Based on the chosen least squares trust region method, the described objective function is minimized applying multi-start in order to achieve more efficient parameter estimation. The iterative procedure continues until convergence to the optimal parameter values is achieved.

Once the unknown sets of parameters are evaluated, it is very important to carry out some additional calculations to establish estimates of the standard errors of the parameters [19]. Applying the described optimization method to search for the best parameter values, the model equations are linearized, so the parameter estimates have linear least squares characteristics; in the case of linear least squares, the parameters are independent of the initially assumed data.

A = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial k} \right)_i^{T} \frac{1}{\hat{\sigma}^{2}} \left( \frac{\partial f}{\partial k} \right)_i    (5)

where A denotes a p \times p matrix and A^{*} is the matrix A evaluated at k^{*}. The joint confidence region (1-\alpha)100\% for the parameter vector k is defined by the following equation [19]:

\left[ k - k^{*} \right]^{T} A^{*} \left[ k - k^{*} \right] \le p \hat{\sigma}^{2} F^{\alpha}_{p, N_m - p}    (6)

where \alpha is the probability level in Fisher's F-distribution and F^{\alpha}_{p, N_m - p} is obtained from the F-distribution tables; N_m - p represents the degrees of freedom, where N_m denotes the total number of measurements and p the number of unknown parameters.
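The estimation workflow described in this section (solving equation (4) for the density at each (T, P) point, minimizing S(k) with a trust region least squares method and multi-start, and obtaining standard errors from the linearized problem) can be sketched as follows. This is a minimal illustration, not the authors' code: the SAFT or PC-SAFT residual Helmholtz expression a_res(rho, T, k) is assumed to be supplied by the user, and the numerical derivative, units (molar density) and density bracket are illustrative choices.

```python
# Minimal sketch of density calculation and parameter estimation for an EOS
# given as a molar residual Helmholtz energy a_res(rho, T, k); assumptions:
# rho is molar density in mol/m^3, P in Pa, T in K.
import numpy as np
from scipy.optimize import brentq, least_squares

R = 8.314462618  # J/(mol K)

def pressure(rho, T, k, a_res):
    """EOS pressure P = rho*R*T + rho**2 * d a_res/d rho (central difference)."""
    h = 1e-6 * rho
    da = (a_res(rho + h, T, k) - a_res(rho - h, T, k)) / (2 * h)
    return rho * R * T + rho**2 * da

def density(P, T, k, a_res, bracket=(1e-3, 2e4)):
    """Solve P_model(rho) = P for rho, i.e. equation (4) rewritten in rho."""
    return brentq(lambda rho: pressure(rho, T, k, a_res) - P, *bracket)

def residuals(k, data, a_res):
    """e_i = rho_lit - rho_cal over all (T, P, rho_lit) measurements."""
    return np.array([rho - density(P, T, k, a_res) for T, P, rho in data])

def fit(data, a_res, starts):
    """Trust-region least squares with multi-start; returns parameters and standard errors."""
    best = None
    for k0 in starts:
        sol = least_squares(residuals, k0, args=(data, a_res), method="trf")
        if best is None or sol.cost < best.cost:
            best = sol
    Nm, p = len(data), len(best.x)
    sigma2 = 2 * best.cost / (Nm - p)                      # sigma^2 = S(k*)/(Nm - p)
    cov = sigma2 * np.linalg.inv(best.jac.T @ best.jac)    # linearized covariance, cf. eqs (5)-(8)
    return best.x, np.sqrt(np.diag(cov))
```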

Further, the corresponding (1-\alpha)100\% marginal confidence interval for each parameter leads to the following equation:

k_i^{*} - t_{\alpha/2} \hat{\sigma}_{k_i} \le k_i \le k_i^{*} + t_{\alpha/2} \hat{\sigma}_{k_i}    (7)

where t_{\alpha/2} is obtained from tables of Student's t-distribution. In order to obtain the standard error (\hat{\sigma}_{k_i}) of parameter k_i, the following relation has been applied [19]:

\hat{\sigma}_{k_i} = \hat{\sigma} \sqrt{ \left\{ \left[ A^{*} \right]^{-1} \right\}_{ii} }    (8)

In our approach there are a lot of data points, so we can take twice the standard error as the 95 % confidence interval.

The sets of parameters have been determined by fitting the experimental density values given in Ref. [20]; the densities are calculated at the same time as the parameter estimation. The equations [22] for the absolute average percentage deviation (AAD), the percentage maximum deviation (MD), the average percentage deviation (Bias) and the standard deviation (σ) are used in order to compare the obtained densities with the values found in the literature [20].
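One common way of evaluating these deviation measures is sketched below. The exact expressions of Ref. [22] are not reproduced in the text above, so the definitions used here, in particular the degrees of freedom assumed for σ, are illustrative assumptions rather than the authors' exact formulas.

```python
# Small helper for AAD, MD, Bias (in %) and sigma (in the density unit) between
# literature and calculated densities; the (N - n_params) divisor is an assumption.
import numpy as np

def deviations(rho_lit, rho_cal, n_params=3):
    rho_lit, rho_cal = np.asarray(rho_lit, float), np.asarray(rho_cal, float)
    rel = 100.0 * (rho_cal - rho_lit) / rho_lit                    # percentage deviations
    aad = np.mean(np.abs(rel))                                     # absolute average % deviation
    md = np.max(np.abs(rel))                                       # maximum % deviation
    bias = np.mean(rel)                                            # average % deviation
    sigma = np.sqrt(np.sum((rho_lit - rho_cal) ** 2) / (len(rho_lit) - n_params))
    return aad, md, bias, sigma

# Example with a few synthetic density values in kg/m^3:
print(deviations([683.5, 660.1, 645.2, 700.3, 670.0],
                 [683.4, 660.3, 645.0, 700.1, 670.4]))
```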

4. RESULTS AND DISCUSSION

The reported densities were measured under high pressure conditions using an Anton Paar HP density-measuring cell [23], with an expanded uncertainty (k=2) of 0.8 kg m-3 for density measurements in the temperature interval 288.15-363.15 K and 1.7 kg m-3 at temperatures of 373.15-413.15 K, and are used in the optimization process. Applying the trust region method, new sets of parameters for both models are evaluated.

The initial values for SAFT are those specified by Radosz [9] for n-heptane, while the PC-SAFT parameter values are taken from Gross and Sadowski [10]. It is also necessary to obtain the inference regions for the determined parameters, using the Gauss-Newton method for linear approximation. The standard errors of the parameters are presented in Table 1, together with the results for SAFT and PC-SAFT. The obtained values of the parameters do not depend on the initial assumptions [9,10]. The results represent good agreement with the assumed values; thus, the optimized parameters are used in the process of density calculation.

Table 1: SAFT and PC-SAFT parameters for n-heptane

SAFT           Calculated values   Standard error   Literature values [9]
mi [-]         4.415               0.055            5.391
v00 [ml/mol]   17.955              0.238            12.282
u0/k [K]       278.237             1.540            204.61

PC-SAFT        Calculated values   Standard error   Literature values [10]
mi [-]         2.184               0.032            3.4831
σ [Å]          3.755               0.019            3.8049
ε/k [K]        290.088             2.153            238.4

Densities of the investigated pure substance are determined over broad ranges of temperature and pressure, between 288.15-413.15 K and 0.1-60 MPa. These calculated values are shown in Table 2 for the SAFT and PC-SAFT models.

The results of the calculated densities are compared with the data listed in the corresponding literature [20]. The obtained deviations were: AAD=0.155 %, MD=1.389 %, Bias=-0.001 % and σ=0.001 kg m-3 for SAFT, and AAD=0.075 %, MD=0.324 %, Bias=0 % and σ=0.001 kg m-3 for PC-SAFT. These values show very good agreement with the supporting literature. The largest deviations occurred when predicting the density at pressures around atmospheric values.

5. CONCLUSION

The aim of this study was fitting the density and optimizing the parameters of a pure, non-associated compound under high pressure conditions, and their comparison with literature data. SAFT and PC-SAFT parameters for the selected pure, non-associated substance are determined. The trust region method with multi-start is used in the optimization process. In order to obtain more accurate values of the parameters, the Gauss-Newton method for solving the nonlinear least squares problem has been used. These computational steps are very helpful to the researcher, providing the quality of the overall fit and informing how trustworthy the parameter estimates are [19].

Densities under supercritical conditions, in the temperature interval 288.15-413.15 K and a pressure range up to 60 MPa, are achieved. The densities predicted with the optimal parameters presented in this work deviate from the literature data with AAD=0.155 %, MD=1.389 %, Bias=-0.001 % and σ=0.001 kg m-3 for SAFT, and AAD=0.075 %, MD=0.324 %, Bias=0 % and σ=0.001 kg m-3 for PC-SAFT. This paper provides more accurate results than those reported by other authors; thus, the values for the pure component could be used to model its mixtures.

Optimized parameters and accurate density data are of practical importance for the process industry, because they could be used to determine various thermodynamic properties such as enthalpy, entropy and heat capacity [24,25].

ACKNOWLEDGEMENTS
This work is supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia, under the projects TR 37001, TR 34011 and OI 172063.

References
[1] Fuenzalida,M., Valenzuela,J.C., Correa,J.R.P.: Improved estimation of PC-SAFT equation of state parameters using a multi-objective variable-weight cost function, Fluid Phase Equilibria, In Press, Accepted Manuscript, available online 18 July 2016.
[2] Diamantonis,N.I., Boulougouris,G.C., Mansoor,E., Tsangaris,D.M., Economou,I.G.: Evaluation of Cubic, SAFT, and PC-SAFT Equations of State for the Vapor-Liquid Equilibrium Modeling of CO2 Mixtures with Other Gases, Industrial and Engineering Chemistry Research, 52 (10) (2013) 3933-3942.
[3] Georgios M. Kontogeorgis, Georgios K. Folas, Thermodynamic Models for Industrial Applications: From Classical and Advanced Mixing Rules to Association Theories, John Wiley & Sons, Wiltshire, UK, 2010.
[4] Kiselev,S.B., Ely,J.F.: Crossover SAFT Equation of State: Application for Normal Alkanes, Industrial and Engineering Chemistry Research, 38 (1999) 4993-5004.
[5] Senol,I.: Perturbed-Chain Statistical Association Fluid Theory (PC-SAFT) Parameters for Propane, Ethylene, and Hydrogen under Supercritical Conditions, International Journal of Chemical, Molecular, Nuclear, Materials and Metallurgical Engineering, World Academy of Science, Engineering and Technology, 5(11) (2011).
[6] Nuno Pedrosa, Lourdes F. Vega, Joao A. P. Coutinho and Isabel M. Marrucho, Phase Equilibria Calculations of Polyethylene Solutions from SAFT-Type Equations of State, Macromolecules, 39 (2006) 4240-4246.
[7] Clare McCabe and Sergei B. Kiselev, Application of Crossover Theory to the SAFT-VR Equation of State: SAFT-VRX for Pure Fluids, Industrial and Engineering Chemistry Research, 43 (2004) 2839-2851.
[8] Walter G. Chapman, Keith E. Gubbins, George Jackson and Maciej Radosz, New Reference Equation of State for Associating Liquids, Industrial and Engineering Chemistry Research, 29 (1990) 1709-1721.
[9] Stanley H. Huang and Maciej Radosz, Equation of State for Small, Large, Polydisperse, and Associating Molecules, Industrial and Engineering Chemistry Research, 29 (1990) 2284-2294.
[10] Joachim Gross and Gabriele Sadowski, Perturbed-Chain SAFT: An Equation of State Based on a Perturbation Theory for Chain Molecules, Industrial and Engineering Chemistry Research, 40 (2001) 1244-1260.
[11] Stephen S. Chen, Aleksander Kreglewski, Applications of the Augmented van der Waals Theory of Fluids. I. Pure Fluids, Ber. Bunsen-Ges. Phys. Chem., 81(10) (1977).
[12] Yuan-Hao Fu, Stanley I. Sandler, A Simplified SAFT Equation of State for Associating Compounds and Mixtures, Industrial and Engineering Chemistry Research, 34 (5) (1995) 1897-1909.
[13] Thomas Kraska and Keith E. Gubbins, Phase Equilibria Calculations with a Modified SAFT Equation of State. 1. Pure Alkanes, Alkanols, and Water, Industrial and Engineering Chemistry Research, 35 (12) (1996) 4727-4737.
[14] Thomas Kraska and Keith E. Gubbins, Phase Equilibria Calculations with a Modified SAFT Equation of State. 2. Binary Mixtures of n-Alkanes, 1-Alkanols, and Water, Industrial and Engineering Chemistry Research, 35 (12) (1996) 4738-4746.
[15] Alejandro Gil-Villegas, Amparo Galindo, Paul J. Whitehead, Stuart J. Mills, George Jackson and Andrew N. Burgess, Statistical associating fluid theory for chain molecules with attractive potentials of variable range, The Journal of Chemical Physics, 106 (1997) 4168.
[16] Clare McCabe, Alejandro Gil-Villegas, George Jackson, Gibbs ensemble computer simulation and SAFT-VR theory of non-conformal square-well monomer-dimer mixtures, Chemical Physics Letters, 303 (1999) 27-36.
[17] Amra Tihic, Georgios M. Kontogeorgis, Nicolas von Solms, Michael L. Michelsen, Applications of the simplified perturbed-chain SAFT equation of state using an extended parameter table, Fluid Phase Equilibria, 248 (2006) 29.
[18] B. Behzadi, C. Ghotbi, A. Galindo, Application of the simplex simulated annealing technique to nonlinear parameter optimization for the SAFT-VR equation of state, Chemical Engineering Science, 60 (2005) 6607-6621.
[19] Peter Englezos, Nicolas Kalogerakis, Applied parameter estimation for chemical engineers, Taylor & Francis Group, 2001.
[20] Ali A. Abdussalam, Gorica R. Ivaniš, Ivona R. Radović, Mirjana Lj. Kijevčanin, Densities and derived thermodynamic properties for the (n-heptane + n-octane), (n-heptane + ethanol) and (n-octane + ethanol) systems at high pressures, The Journal of Chemical Thermodynamics, 100 (2016) 89-99.
[21] Stanley M. Walas, Phase Equilibria in Chemical Engineering, Butterworth-Heinemann, 1985.
[22] Gorica R. Ivaniš, Aleksandar Ž. Tasić, Ivona R. Radović, Bojan D. Djordjević, Slobodan P. Šerbanović and Mirjana Lj. Kijevčanin, Modeling of density and calculations of derived volumetric properties for n-hexane, toluene and dichloromethane at pressures 0.1-60 MPa and temperatures 288.15-413.15 K, Journal of the Serbian Chemical Society, 80 (2015) 1423-1433.
[23] Gorica R. Ivaniš, Aleksandar Ž. Tasić, Ivona R. Radović, Bojan D. Djordjević, Slobodan P. Šerbanović and Mirjana Lj. Kijevčanin, An apparatus proposed for density measurements in compressed liquid regions at pressures of 0.1-60 MPa and temperatures of 288.15-413.15 K, Journal of the Serbian Chemical Society, 80 (8) (2015) 1073-1085.
[24] Nikolaos I. Diamantonis and Ioannis G. Economou, Evaluation of SAFT and PC-SAFT EoS for the calculation of thermodynamic derivative properties of fluids related to carbon capture and sequestration, Energy and Fuels, 25(7) (2011) 3334-3343.
[25] Xiaodong Liang, Bjørn Maribo-Mogensen, Kaj Thomsen, Wei Yan, and Georgios M. Kontogeorgis, Approach to Improve Speed of Sound Calculation within PC-SAFT Framework, Industrial and Engineering Chemistry Research, 51 (2012) 14903-14914.

THE APPLICATION OF IR THERMOGRAPHY FOR THE CRACKS


DETECTION IN THE COMPOSITE STRUCTURES USED IN AVIATION
STEVAN JOVIČIĆ
Technical Test Center, Belgrade, stevanjovicic@gmail.com
IVANA KOSTIĆ
Technical Test Center, Belgrade, kostic.ici@gmail.com
ZORAN ILIĆ
Technical Test Center, Belgrade, zoranilic_65@yahoo.com
LJUBIŠA TOMIĆ
Military Technical Institute, Belgrade, ljubisa.tomic@gmail.com
ALEKSANDAR KOVAČEVIĆ
Technical Test Center, Belgrade, aleksandarkovacevic1962@yahoo.com

Abstract: This paper presents non-destructive diagnostic techniques based on infrared thermography for the detection of cracks that might have a negative influence in composite materials used in aviation. The research is performed on a real aeroplane component, a propeller cone. The obtained results indicate the potential of infrared thermography methods for the detection of water in aerospace components, which is important because its presence even in small quantities may cause defects in these elements. The method is based on the fact that water changes temperature more slowly than the composite materials.
Keywords: infrared thermography, water, cone propeller.

1. INTRODUCTION
The design of an aircraft structure is a complex issue. In order to design a structure which is able to fulfill all requirements, it is necessary to consider a large range of design criteria. Parts of the aircraft structure are today designed according to a damage tolerance design philosophy. Also, the interval between detection and the critical moment has to be compatible with the time for inspection [1].

Composite materials are in use more and more often. They are made from two or more materials with different physical or chemical properties, which together give the composite unique properties that are much better than those of the constituents taken separately. They act together as one; the properties of the composite material are superior to the properties of the individual materials. The development of composite materials in the aircraft industry results in good durability properties, corrosion resistance, low weight and low cost. There is a tendency to develop non-destructive techniques focused on the detection of very small discontinuities.

A cone propeller is one of the most highly stressed components on an aircraft. A properly maintained propeller is designed to perform normally under corrosion, stone nicks, ground strikes and additional unintended stress. Additional causes of overstress conditions are exposure to overspeed conditions, object strikes, engine problems, vibration dampers, etc. The most common mechanical damage takes the form of sharp-edged nicks and scratches created by the displacement of material from the blade surface, and corrosion that forms pits and other defects in the surface. This small-scale damage tends to concentrate stress in the affected area and, eventually, these areas may develop and propagate cracks. Corrosion is a serious cost and safety concern to operators of aircraft made of metallic structures. All nicks are potential crack starters, and erosion is the loss of material from the surface by the action of small particles such as sand or water.

Cracks are cause for the immediate removal of the propeller and its detailed inspection. Dents can be harmful, depending on their size, location and configuration [2].

2. EXPERIMENT SET UP
Passive thermography observes temperature states and temperature changes on surfaces without any interference. Active thermography offers different inspection methods: it stimulates thermal processes and then records the surface temperature distribution and temperature processes.

Picture 1 shows a block diagram of the thermal inspection system that has been developed. The thermal images are

produced with a commercial FLIR SC 620 infrared camera. The minimum detectable temperature difference is 20 mK when operating the detector in the 8 to 12 micrometer range. The camera produces images at a rate of 30, 60 or 120 frames per second.

Picture 1. Experimental setup of the thermal imaging system

The model for testing is a composite cone propeller degraded after flights.

Picture 2. Test model

The aim is to present how temperature differences take place during thermovision testing. It is important to note that differences in emissivity due to paints have not been considered.

3. DIFFERENT APPROACHES
The work concentrates on the factors which affect the detectability of damage in the cone propeller. Attention has been paid to the required inspection procedures. The capability of the IR technique has to be used in the right way: the IR technique requires the problem to be identified properly. Using passive thermography, it was not possible to identify even the cracks which could be seen visually [3].
The first approach involves using a lamp, which is the conventional method. However, its energy was not enough to produce a thermal reaction in the cone propeller; the composite material does not show a difference when recorded by the thermal camera. A different approach had to be applied. The second approach uses the well-known negative impact of water in aircraft structural components. The aircraft industry is familiar with this problem and its engineers check the plane structure after regular flights. The idea is to use the impact of water in the composite material.

Picture 3. Cone propeller presented by passive thermography

So active thermography has to be used. Two approaches have been used in this work. The experimental schedule assumed the following possibilities:
- water impact;
- temperature difference (thermal activation);
- thermovision registration of the surface temperature distribution.

4. THERMOGRAPHIC INSPECTION
Before temperature activation the cone propeller is in thermal equilibrium with the environment (temperature around 20 °C). The active method applied in this study was carried out in the following way. First, the cone propeller was heated with a Makita HG6020 heat gun kit for 5 minutes. The temperature was 100 °C and was controlled by the heat gun's thermometer. During the exposure to the heater, heat exchange with the environment begins. After that the cone propeller was filled with water at a temperature of about 10 °C. After this procedure the cone propeller is ready for inspection by the thermal camera, because the temperature difference between these two conditions is more than clear.

Picture 4. Cone propeller after active treatment

Picture 4 presents a thermogram from the thermal camera. Many dots can be seen; some of them could be seen by visual inspection, but not all of them.

Using this non-standard method, many areas of damage caused by corrosion have been observed. The damage can be detected under the paint of the tested cone propeller. Some cracks are beneath the surface, but the thermal camera can detect cracks about 4 mm beneath the surface, as can be seen in this experiment.

5. CONCLUSION
The purpose of the work is to define a way of searching for cracks using a non-destructive method. Applying this IR inspection allows the detection of small cracks. This non-standard method is based on the knowledge that water can make cracks on composite materials. The method must be standardized: a proposal for improvement must formulate the dimension of the sample, the type of sample, the temperature treatment and the time of treatment. With further improvement, IR inspection techniques will upgrade inspection reliability in the future.

References
[1] Assler,H., Telgkamp,J.: Design of aircraft structures under special consideration of NDT, 2004, Germany.
[2] Allen,J., Ballough,J.: Aircraft propeller maintenance, 2005, Flight Standards Service, Washington, D.C.
[3] Tomic,Lj., Karkalic,R.: Analysis of camouflage uniform by IC thermography, ETRAN.

THERMAL AND CAMOUFLAGE PROPERTIES OF Rosalia alpina


LONGHORN BEETLE WITH STRUCTURAL COLORATION
IVANA KOSTIĆ
Technical Test Center, Belgrade, kostic.ici@gmail.com
DANICA PAVLOVIĆ
Photonics Center, Institute of Physics, University of Belgrade, danica.pavlovic@ipb.ac.rs
VLADIMIR LAZOVIĆ
Photonics Center, Institute of Physics, University of Belgrade, vladimir.lazovic@ipb.ac.rs
DARKO VASILJEVIĆ
Photonics Center, Institute of Physics, University of Belgrade, darko@ipb.ac.rs
DEJAN STOJANOVIĆ
Fruška Gora National Park, Sremska Kamenica, dejanstojanovic021@yahoo.co.uk
DRAGAN KNEŽEVIĆ
Military Technical Institute, Belgrade, dragankn@gmail.com
LJUBIŠA TOMIĆ
Military Technical Institute, Belgrade, ljubisa.tomic@gmail.com
GORAN DIKIĆ
Military Academy, University of Defence in Belgrade, goran.dikic@mod.gov.rs
DEJAN PANTELIĆ
Photonics Center, Institute of Physics, University of Belgrade, pantelic@ipb.ac.rs

Abstract: Rosalia alpina is a longhorn beetle possessing a distinctive gray body with several black spots. They serve as camouflage within its environment (beech forest), and we suppose that the insect also uses them to control its body temperature. We have studied the optical properties of this particular insect, ranging from the visible to the far infrared part of the spectrum. Optical analysis has shown strong absorption in the visible, while a thermal camera (operating in the spectral range from 7.5 to 13 µm) has shown quite uniform emissivity of the whole body. Numerical ray tracing was used to explain the exact optical mechanism of the strong absorption of the black spots. Possible military applications of the natural camouflage and absorption mechanism are outlined.
Keywords: Infrared imaging, natural photonics, camouflage.
1. INTRODUCTION
Coloration has multiple purposes in the living world: to hide, attract or warn. It can also be used for heat energy exchange with the environment [1]. Thermoregulation can be improved using natural photonic processes. In that respect, we have analyzed the antennae and elytra (the modified forewings of Coleoptera, which are used as a hard shield for their body) of Rosalia alpina (Linnaeus, 1758) (see the photograph in Picture 1).

This is a large longhorn beetle (family Cerambycidae) with flat, blue-gray elytra with large, dominating black spots. It is 15-38 mm long, with long antennae and striking black tufts of hair on the central segments of the antennae [2]. Such coloration serves as a good camouflage within their preferred habitat, the European beech [3].

The largest part of the beetle's body is covered with a dense tomentum, consisting of very fine, light blue, blue-gray and dark blue hairs. The black spots on the elytra and pronotum are also covered with dense black hairs, which give the spots their velvety appearance [2,4]. Picture 1 shows the photograph of Rosalia alpina (the test sample).

Natural photonic structures have the main purpose of producing colors that would have been impossible to generate by pigments alone. In the insect world this is particularly true for blue colors (generated by pepper-pot structures found in many Polyommatinae [5]) and green colors (produced by chiral photonic structures in Callophrys rubi (Linnaeus, 1758) [6]).


Picture 1. The photograph of Rosalia alpina.


Sometimes a photonic structure enhances the pigment color, as observed in some snakes [7] or butterflies [8]. Micro- and nano-structures localize light and increase the average path length within the structure, thus increasing absorption. The biological purpose of such structures might be camouflage or thermoregulation, as in Lycaenid butterflies [9].

Picture 3. The hairs in gray zones of Rosalia alpina.

More advanced structures have a dual purpose, as in the Saharan silver ant (Cataglyphis bombycina (Roger, 1859) [1]): to reflect the maximum amount of the visible light and to simultaneously dissipate infrared radiation directly into the atmospheric window at mid-infrared. This enables the insect to efficiently regulate its body temperature in a very hostile desert environment.
The four prominent black spots on the elytra of Rosalia alpina have attracted our attention. We have found that the light absorption is not the sole consequence of dark pigments (most probably melanin), but that it is strongly influenced by the underlying structure [10]. We have been using a Scanning Electron Microscope to examine the structure of Rosalia alpina hairs within the black spot (Picture 2). In the adjacent gray zones the hairs have a completely different structure, as shown in Picture 3.

Picture 4. The black spot area of Rosalia alpina surrounded by hairs with a different structure.

Materials developed by nature during evolution have a significant impact on the search for artificial materials with useful absorptive properties, especially for solar energy collection, thermal energy dissipation and camouflage.
Bearing in mind a possible military application, we have been examining the cooling process in the area of the black spots of Rosalia alpina. The first results have been obtained using thermographic analysis based on pulse thermography [10]. This method involves the analysis of images that have been recorded by an infrared (IR) camera while irradiating the test sample with infrared radiation. We have found that the light absorption is not the sole consequence of dark pigments (most probably melanin), but that it is strongly influenced by the underlying structure.

Picture 2. The hairs in the black spot area of Rosalia alpina.

Both structures can be viewed, at the same scale, in Picture 4. The different structure of the hairs can be easily noticed in this picture.

Here we analyze the spectral absorption of the black spots using several laser wavelengths.


The energy emitted by the laser during irradiation of the black spot was 49.632 mJ (red light), 5.73 mJ (green light) and 176.305 mJ (blue light). This was followed by a temperature increase of 44.226 °C (in the case of red light), 23.512 °C (in the case of green light) and 4.531 °C (in the case of blue light). It can be noticed that approximately 10 times less energy was emitted in the case of green light, but the temperature increase was only 2 times smaller than in the case of red light. In addition, 3.5 times more energy was emitted in the case of blue light, but the temperature increase was nearly 10 times smaller than in the case of red light.
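As a quick check of these ratios (a worked example added here; the energy and temperature values are those reported above):

\[
\frac{49.632}{5.73} \approx 8.7, \qquad \frac{44.226}{23.512} \approx 1.9, \qquad
\frac{176.305}{49.632} \approx 3.55, \qquad \frac{44.226}{4.531} \approx 9.8,
\]

which is consistent with the stated factors of roughly 10 times less energy for green light, a temperature rise 2 times smaller, 3.5 times more energy for blue light, and a temperature rise nearly 10 times smaller.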

2. EXPERIMENTAL SETUP
The test equipment comprised a set of laser pointers, a thermal camera and a personal computer (PC), which recorded digital data in real time. The surface of the test sample (Rosalia alpina) was heated using red, green and blue laser pointers (650 nm, 532 nm and 405 nm), positioned at a distance of 50 cm from the sample. Picture 5 shows the experimental setup.

Picture 6 shows a thermal image recorded during the experiments with the red laser, for frame number 700. This frame is chosen because the difference between the maximal and minimal temperature is small enough that both the details of the target and the background are visible.

Picture 5. The experimental setup.


Cooling of the test sample, previously heated by means of a short laser pulse, was monitored using a commercial thermal camera "FLIR SC620", operating in the spectral range from 7.5 to 13 µm with a FLIR T197189 macro lens. This type of camera has a 640x480 focal plane array of semiconductor detectors (vanadium oxide, VOx). Each detector measures the intensity of infrared radiation. These values can be represented on a monitor as a thermal image coded in shades of gray or in color, and can be converted to temperature values using the appropriate table.

Picture 6. Thermal image recorded during the 700th frame, in the case when the red laser was pointed at the black spot.

Picture 7 shows a typical temperature change during the experiments.

In order to obtain optimal experimental results, the zoom was set manually. After that, all the other parameters (ambient temperature, emissivity, the distance of the object from the thermal imaging camera, humidity, etc.) were determined. The measurements presented in this paper have been carried out at short distances (about 50 cm) and are not affected by atmospheric absorption.
A thermal imaging camera allows the conversion of the spatially inhomogeneous distribution of the radiation flux of a scene, due to differences in the distribution of temperature and/or emissivity, into a visible image. The right choice of measuring geometry and the correct interpretation of the results are based on knowledge of the spatial and temperature resolution (sensitivity) of thermal imaging cameras.

Picture 7. Temperature change in the case when the red laser was pointed at the black spot.

3. EXPERIMENTAL RESULTS

The experiments were organized in two ways. In the first case the laser was pointed at the black spot; in the second it was used to irradiate the gray area near the black spot. During a short time (several seconds) the laser pulse heated the irradiated place. The complete process was recorded by the IR camera and analyzed later using MATLAB software.

Here we try to establish the dominant cooling mechanism: convection, conduction or radiation. We have started with an analysis based on Newton's law of cooling. This law states that the rate of heat loss of a body is proportional to the difference in temperature between the body and its surroundings. This means that the heat transfer coefficient, which mediates between heat losses and temperature differences, is a constant. This condition is true in thermal conduction, but only approximately true in conditions of convective heat transfer.

During the experiments the radiation time of the laser was not controlled precisely. The next experiments will be organized with the possibility of strictly controlling the radiation time. Also, with the intention of obtaining more comparable results, more attention will be paid to the positioning of the laser beam. During the last experiments, irradiation of the object was realized by pointing the laser from the hand. In addition, a couple of lasers with different wavelengths will be used to better cover the radiation spectrum.

Newton's law of cooling is described by the equation

\frac{dT_t}{dt} = -k \left( T_t - T_a \right)    (1)

whose solution is

T_t = T_a + \left( T_0 - T_a \right) e^{-kt}    (2)

In these equations, T_t is the temperature at time t, T_a is the ambient temperature, T_0 is the initial temperature of the body, and k is a constant. From equation (2) it follows that

\ln \frac{T_t - T_a}{T_0 - T_a} = -kt    (3)

We can see that the corresponding curve, in the ideal case of cooling, should be a straight line. In Picture 8 a slight displacement of the real process (blue line) compared to the ideal case (red line) can be seen.
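A minimal sketch of this fit is given below (the authors used MATLAB; Python with synthetic data is used here purely for illustration): the cooling constant k follows from the slope of the straight line predicted by equation (3).

```python
# Sketch of estimating the cooling constant k from a recorded temperature trace,
# assuming equation (3): ln((Tt - Ta)/(T0 - Ta)) = -k t.
import numpy as np

def cooling_constant(t, T, T_ambient):
    """Least-squares slope of ln((T - Ta)/(T0 - Ta)) versus time; returns k in 1/s."""
    y = np.log((T - T_ambient) / (T[0] - T_ambient))
    k = -np.polyfit(t, y, 1)[0]   # slope of the straight line predicted by Newton's law
    return k

# Example with synthetic data sampled at 30 frames per second:
t = np.arange(0, 10, 1 / 30)
T = 20.0 + (60.0 - 20.0) * np.exp(-0.35 * t)   # ideal Newtonian cooling
print(round(cooling_constant(t, T, 20.0), 3))  # ~0.35
```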

Picture 9. Displacement of the real curves compared to the curve obtained from Newton's law of cooling: blue, at the position of maximal temperature; red, in the same column but one row below; green, in the same column but two rows below.

4. CONCLUSION
Our experiments are in progress. The results we have obtained up to now show the existence of a different photonic structure in the area of the black spots. This is confirmed by the scanning electron microscope, but we want to confirm it by comparing the results of irradiation of the black spots and of the area outside them.

Picture 8. Natural logarithm of the temperature difference in the case of the red laser; blue line: real data; red line: in accordance with Newton's law of cooling.

In this paper we present the preliminary results of a thermal analysis of Rosalia alpina. The insect was irradiated with laser radiation at three wavelengths spanning the whole visible spectrum (405 nm, 532 nm and 650 nm). A significant departure from Newton's law of cooling indicates that the thermal dissipation is regulated by radiation from the photonic structures. It seems that the structure is optimized such that it maximizes absorption in the visible part of the spectrum and simultaneously minimizes thermal losses due to radiation. More experimental and theoretical research is needed to better assess the thermal effects.

Picture 9 shows the temperature difference in the case when the red laser was pointed at the black spot. The blue line represents the results obtained at the position of the pixel with the maximal temperature reached during irradiation. The red line represents the results for the pixel in the same column but one row below, and the green line represents the results in the same column but two rows below. Similar results have been obtained with the green and blue lasers.
Obviously, the cooling process does not coincide ideally with Newton's law of cooling. There is a process of heat conduction in the space that surrounds the irradiated point; the existence of negative values in the green curve is clear evidence of that. Its maximal temperature is reached later than in the case of the red curve, after conduction of energy from the space with higher temperature.

If proven true, the described effect could be used in the construction of clothing for military personnel and covers for arms, which would diminish thermal dissipation through radiation. The effect could be important from the military point of view, because it can help the soldier to survive in cold weather while reducing his thermal signature.


References
[1] Norman Nan Shi, Cheng-Chia Tsai, Fernando Camino, Gary D. Bernard, Nanfang Yu, Rüdiger Wehner: Keeping cool: Enhanced optical reflection and heat dissipation in silver ants, Science, Vol. 349, pp. 298-301, 2015.
[2] Bense,U.: Bockkäfer, illustrierter Schlüssel zu den Cerambyciden und Vesperiden Europas, Margraf Verlag, 512 p., 1995.
[3] Starzyk,J.R.: Rosalia alpina (LINNAEUS, 1758), Nadobnica alpejska. In: Glowacinski,Z. & Nowacki,J. (eds), Polska czerwona ksiega zwierzat. Bezkregowce: 148-149. IOP PAN Kraków, AR Poznan, 448 p., 2004.
[4] Duelli,P., Wermelinger,B.: Der Alpenbock (Rosalia alpina), Ein seltener Bockkäfer als Flaggschiff-Art, Eidg. Forschungsanstalt WSL, CH-8903 Birmensdorf, 2005.
[5] Vertesy,Zs., Balint,Zs., Kertesz,K., Vigneron,J.P., Lousse,V., Biro,L.P.: Wing scale microstructures and nanostructures in butterflies - natural photonic crystals, J. Microsc., 224, pp. 108-110, 2006.
[6] Schröder-Turk,G.E., Wickham,S., Averdunk,H., Brink,F., Fitz Gerald,J.D., Poladian,L., Large,M.C.J., Hyde,S.T.: The chiral structure of porous chitin within the wing-scales of Callophrys rubi, J. Struct. Biol., 174, pp. 290-295, 2011.
[7] Spinner,M., Kovalev,A., Gorb,S.N., Westhoff,G.: Snake velvet black: Hierarchical micro- and nanostructure enhances dark colouration in Bitis rhinoceros, Scientific Reports 3, Article number: 1846, 2013.
[8] Vukusic,P., Sambles,J.R., Lawrence,C.R.: Structurally assisted blackness in butterfly scales, Proc. Roy. Soc., Vol. 271, S237-S239, May 2004.
[9] Biro,L.P., Balint,Zs., Kertesz,K., Vertesy,Z., Mark,G.I., Horvath,Z.E., Balazs,J., Mehn,D., Kiricsi,I., Lousse,V., Vigneron,J.-P.: Role of photonic-crystal-type structures in the thermal regulation of a Lycaenid butterfly sister species pair, Phys. Rev. E, 67, 021907, Feb. 2003.
[10] Dikic,G., Pavlovic,D., Tomic,Lj., Pantelic,D., Vasiljevic,D.: The thermographic analysis of photonic characteristics of Rosalia alpina surfaces, Symposium IcETRAN, Zlatibor, Serbia, 13-16 June 2016.

ISOGEOMETRIC ANALYSIS OF FREE VIBRATION OF ELLIPTICAL


LAMINATED COMPOSITE PLATES USING THIRD ORDER SHEAR
DEFORMATION THEORY
OGNJEN PEKOVIĆ
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, opekovic@mas.bg.ac.rs
SLOBODAN STUPAR
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, sstupar@mas.bg.ac.rs
ALEKSANDAR SIMONOVIĆ
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, asimonovic@mas.bg.ac.rs
TONI IVANOV
University of Belgrade, Faculty of Mechanical Engineering, Belgrade, tivanov@mas.bg.ac.rs

Abstract: In this research paper an isogeometric laminated composite plate finite element formulation based on the third order shear deformation theory is presented. Numerical examples illustrate natural frequencies and free vibration mode shapes of elliptical laminated composite plates. The obtained numerical results are presented and then compared to other available numerical results.
Keywords: isogeometric analysis, TSDT, elliptical laminated composite plates.

1. INTRODUCTION
Since the creation of glass fibers in the 1930s, the first fiberglass boats in the 1940s and their introduction in the aircraft industry (the Boeing 707 in the 1950s had 2% of the structure made from composites), composite materials have slowly become ubiquitous in the marine, automotive and aerospace industries. Today, the Boeing 787 Dreamliner is the first airliner with composite wings and fuselage (50% of the aircraft is composite), and the new Airbus A350XWB is 53% made of composites. The main advantages of composites are their strength and lightness, which lead to improved fuel efficiency and more cost-effective products.

This increase in composites usage was followed by a great research effort by the scientific community. Laminated composite plates (laminates) generated particularly large interest because of their industrial applications. The classic book on the subject [1] covers different plate theories applied to laminated plates and shells, as well as the appropriate analytical and finite element models and solutions.

Since analytical solutions to the governing differential equations are generally available only for simple geometries, the focus of the scientific community has been the development of computational methods that can treat complex geometries with satisfactory accuracy. In this regard, the Finite Element Method (FEM) became the standard tool for the treatment of stress analysis problems. FEM seeks the solution to the weak (integral) form of the differential equations through the use of low order (mostly linear or quadratic) polynomial basis functions. One of the main shortcomings of FEM is the necessity to build a new finite element model (mesh) in order to run the analysis. This procedure can take up to 80% of the total time required for the analysis [2]. A novel method generally known as Isogeometric Analysis (IGA) [2,3] has been proposed in order to integrate geometrical design and numerical analysis. The isogeometric finite element method uses NURBS basis functions for the approximation of unknown fields, the same as almost every CAD or CAM package. NURBS offer a general mathematical representation of both analytical geometric objects and freeform geometry. The application of IGA to plate and shell analysis is presented in [4-8] for isotropic plates and shells, and in [9-15] for composite plate and shell analysis.

In this paper, free vibration analysis of elliptical composite plates based on the TSDT of Reddy [1] is presented. Natural frequencies and mode shapes are calculated and compared to other solutions.

2. NURBS PRELIMINARIES
In this section only a brief recall of Non-Uniform Rational B-Spline (NURBS) technology is given. Classic textbooks [16,17] provide more details on the subject.

NURBS are mathematical representations of 1D, 2D or 3D objects. They are capable of representing analytical shapes (e.g. conics) as well as free-form shapes with mathematical exactness, and they offer the user easy manipulation and control of shape and smoothness.
A pth-degree NURBS curve is defined as

C(u) = ( Σ_{i=0}^{n} N_{i,p}(u)·w_i·P_i ) / ( Σ_{i=0}^{n} N_{i,p}(u)·w_i ),   a ≤ u ≤ b            (1)

where the {P_i} are the control points, the {w_i} are the weights and the {N_{i,p}(u)} are the pth-degree B-spline basis functions defined as

N_{i,0}(u) = 1 if u_i ≤ u < u_{i+1}, 0 otherwise                                                   (2)

N_{i,p}(u) = (u − u_i)/(u_{i+p} − u_i)·N_{i,p−1}(u) + (u_{i+p+1} − u)/(u_{i+p+1} − u_{i+1})·N_{i+1,p−1}(u)   (3)

on the non-uniform knot vector

U = { a, ..., a, u_{p+1}, ..., u_{m−p−1}, b, ..., b },  with a and b repeated p+1 times            (4)

Multivariate NURBS basis functions are defined through the tensor product. A NURBS surface of degree p in the u direction and degree q in the v direction is a bivariate vector-valued piecewise rational function of the form

S(u,v) = ( Σ_{i=0}^{n} Σ_{j=0}^{m} N_{i,p}(u)·N_{j,q}(v)·w_{i,j}·P_{i,j} ) / ( Σ_{i=0}^{n} Σ_{j=0}^{m} N_{i,p}(u)·N_{j,q}(v)·w_{i,j} ),   0 ≤ u, v < 1    (5)

where the {P_{i,j}} are the control points, the {w_{i,j}} are the weights and {N_{i,p}(u)} and {N_{j,q}(v)} are the pth-degree and qth-degree B-spline basis functions defined on the non-uniform knot vectors

U = { a, ..., a, u_{p+1}, ..., u_{r−p−1}, b, ..., b }                                              (6)
V = { c, ..., c, v_{q+1}, ..., v_{s−q−1}, d, ..., d }                                              (7)

where r = n + p + 1 and s = m + q + 1.
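The recursion (2)-(3) and the rational definition (1) translate directly into code. The following short Python sketch is only an illustration added in editing (it is not part of the original paper, and the function and variable names are arbitrary); it evaluates the B-spline basis functions by the Cox-de Boor recursion and a NURBS curve point according to eq. (1).

import numpy as np

def bspline_basis(i, p, u, U):
    # Cox-de Boor recursion, eqs. (2)-(3); 0/0 terms are treated as zero
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = 0.0 if U[i + p] == U[i] else \
        (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    right = 0.0 if U[i + p + 1] == U[i + 1] else \
        (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_curve_point(u, p, U, P, w):
    # rational curve point C(u) of eq. (1); P is an (n+1) x dim array of control points
    n = len(P) - 1
    N = np.array([bspline_basis(i, p, u, U) for i in range(n + 1)])
    W = N * w
    return (W @ P) / W.sum()

# example: a quadratic NURBS quarter circle (a conic represented exactly)
U = [0, 0, 0, 1, 1, 1]
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(2) / 2, 1.0])
print(nurbs_curve_point(0.5, 2, U, P, w))   # lies on the unit circle

The same tensor-product logic extends the sketch to the surface definition (5).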

3. EQUATIONS OF MOTION

In the third order shear deformation theory (TSDT) of Reddy the displacement field is defined as:

u(x,y,z) = u0(x,y) + z·φx − (4/(3h²))·z³·(φx + ∂w0/∂x)
v(x,y,z) = v0(x,y) + z·φy − (4/(3h²))·z³·(φy + ∂w0/∂y)
w(x,y,z) = w0(x,y)                                                                                  (8)

where u0, v0, w0 represent the linear displacements of the midplane, φx, φy are the rotations of the normals to the midplane about the y and x axes, respectively, and h denotes the total thickness of the laminate.

The in-plane strains εp = {εxx εyy γxy}^T are given as

εp = ε0 + z·ε1 + z³·ε3                                                                              (9)

with

ε0 = {εxx⁰ εyy⁰ γxy⁰}^T = { ∂u0/∂x,  ∂v0/∂y,  ∂u0/∂y + ∂v0/∂x }^T                                   (10)

ε1 = {εxx¹ εyy¹ γxy¹}^T = { ∂φx/∂x,  ∂φy/∂y,  ∂φx/∂y + ∂φy/∂x }^T                                   (11)

ε3 = {εxx³ εyy³ γxy³}^T = −(4/(3h²))·{ ∂φx/∂x + ∂²w0/∂x²,  ∂φy/∂y + ∂²w0/∂y²,  ∂φx/∂y + ∂φy/∂x + 2·∂²w0/∂x∂y }^T   (12)

and the transverse shear components γp = {γyz γxz}^T as

γp = γ0 + z²·γ2                                                                                     (13)

with

γ0 = {γyz⁰ γxz⁰}^T = { φy + ∂w0/∂y,  φx + ∂w0/∂x }^T                                                (14)

γ2 = {γyz² γxz²}^T = −(4/h²)·{ φy + ∂w0/∂y,  φx + ∂w0/∂x }^T                                        (15)

The constitutive relations between stresses and strains in the kth lamina in the case of the plane stress state, written in the local coordinate system of the principal material coordinates (x1, x2, x3), where x1 is the fibre direction, x2 the in-plane direction normal to the fibre and x3 the direction normal to the lamina plane, are given by

{σ1, σ2, τ12, τ23, τ13}^(k)T = [ Q11 Q12 0 0 0 ; Q12 Q22 0 0 0 ; 0 0 Q66 0 0 ; 0 0 0 Q44 Q45 ; 0 0 0 Q45 Q55 ]^(k) · {ε1, ε2, γ12, γ23, γ13}^(k)T   (16)

The quantities Qij are called the plane-stress reduced stiffness components and are given in terms of the material properties of each layer as

Q11^(k) = E1^(k)/(1 − ν12^(k)·ν21^(k)),   Q22^(k) = E2^(k)/(1 − ν12^(k)·ν21^(k)),   Q12^(k) = ν12^(k)·E2^(k)/(1 − ν12^(k)·ν21^(k)),
Q66^(k) = G12^(k),   Q44^(k) = G23^(k),   Q55^(k) = G13^(k)                                          (17)

E1^(k), E2^(k) are the Young moduli, ν12^(k), ν21^(k) are the Poisson coefficients and G12^(k), G13^(k), G23^(k) are the shear moduli of the lamina.

Composite laminates are usually made of several orthotropic layers of different orientations. In order to express the constitutive relations in the referent laminate (x, y, z) coordinate system (Picture 1), the lamina constitutive relations are transformed as

{σxx, σyy, τxy, τyz, τxz}^(k)T = [ Q̄11 Q̄12 Q̄16 0 0 ; Q̄12 Q̄22 Q̄26 0 0 ; Q̄16 Q̄26 Q̄66 0 0 ; 0 0 0 Q̄44 Q̄45 ; 0 0 0 Q̄45 Q̄55 ]^(k) · {εxx, εyy, γxy, γyz, γxz}^T   (18)

where the elements of the matrix in eq. (18) are the layer plane-stress reduced stiffnesses transformed to the laminate coordinate system [1].

Picture 1. Local and global coordinate systems of a laminate

The dynamic form of the principle of virtual work in matrix form is given by

∫Ω δε̄^T·D̂·ε̄ dΩ + ∫Ω δγ̄^T·D̂s·γ̄ dΩ = −∫Ω δū^T·m·ǖ dΩ                                               (19)

where ε̄ = {ε0; ε1; ε3} and γ̄ = {γ0; γ2} are the generalized strain vectors, ū^T = {u0 v0 w0 φx φy ∂w0/∂x ∂w0/∂y}^T, and the inertia matrix m is defined as

m = [  I0      0       0    J1          0          −c1·I3        0       ;
       0       I0      0    0           J1          0           −c1·I3   ;
       0       0       I0   0           0           0            0       ;
       J1      0       0    K2          0          −c1·I4+c1²·I6 0       ;
       0       J1      0    0           K2          0           −c1·I4+c1²·I6 ;
      −c1·I3   0       0   −c1·I4+c1²·I6 0           c1²·I6       0       ;
       0      −c1·I3   0    0          −c1·I4+c1²·I6 0            c1²·I6  ]                          (20)

with c1 = 4/(3h²),

(I0, I1, I2, I3, I4, I6) = Σ_{k=1}^{N} ∫_{−h/2}^{h/2} ρ^(k)·(1, z, z², z³, z⁴, z⁶) dz,

J1 = I1 − c1·I3  and  K2 = I2 − 2·c1·I4 + c1²·I6.

The matrices that relate the stress resultants to the generalized strains are given as

D̂ = [ A B E ; B D F ; E F H ],   D̂s = [ As Ds ; Ds Fs ]                                             (21)

with

(Aij, Bij, Dij, Eij, Fij, Hij) = Σ_{k=1}^{N} ∫_{−h/2}^{h/2} Q̄ij^(k)·(1, z, z², z³, z⁴, z⁶) dz,
(Aijs, Dijs, Fijs) = Σ_{k=1}^{N} ∫_{−h/2}^{h/2} Q̄ij^(k)·(1, z², z⁴) dz;   i, j = 4, 5.

4. ISOGEOMETRIC FINITE ELEMENT MODEL OF TSDT PLATE

In the isogeometric formulation of TSDT plates, the field variables are the in-plane displacements, the transverse displacement and the rotations at the control points:

u = {u0 v0 w0 φx φy}                                                                                (22)

The same NURBS basis functions that are used to describe the plate geometry are used for the interpolation of the field variables:

u = Σ_{I=1}^{n×m} N_I·q_I                                                                           (23)

where n×m is the number of control points (basis functions), N_I are the rational basis functions and q_I are the degrees of freedom associated with control point I:

N_I = [ NI 0 0 0 0 ; 0 NI 0 0 0 ; 0 0 NI 0 0 ; 0 0 0 NI 0 ; 0 0 0 0 NI ]                            (24)

q_I = {u0I v0I w0I φxI φyI}^T                                                                        (25)

The in-plane strains and shear strains are obtained using eqs. (9)-(15) and (23) as

εp = Σ_I ( B⁰_I + z·B¹_I − c1·z³·B³_I )·q_I                                                          (26)

γp = Σ_I ( B^S0_I − c2·z²·B^S2_I )·q_I                                                               (27)

with c2 = 4/h² and

B⁰_I = [ N_I,x 0 0 0 0 ; 0 N_I,y 0 0 0 ; N_I,y N_I,x 0 0 0 ]                                         (28)

B¹_I = [ 0 0 0 N_I,x 0 ; 0 0 0 0 N_I,y ; 0 0 0 N_I,y N_I,x ]                                         (29)

B³_I = [ 0 0 N_I,xx N_I,x 0 ; 0 0 N_I,yy 0 N_I,y ; 0 0 2·N_I,xy N_I,y N_I,x ]                        (30)

B^S0_I = B^S2_I = [ 0 0 N_I,y 0 N_I ; 0 0 N_I,x N_I 0 ]                                              (31)

N_I,x and N_I,y denote the first and N_I,xx, N_I,yy, N_I,xy the second derivatives of N_I with respect to x and y.

For the free vibration analysis the dynamic form of the principle of virtual work reduces to

( K − ω²·M )·q = 0                                                                                   (32)

K is the global stiffness matrix, defined as

K = ∫Ω {B⁰; B¹; B³}^T · [ A B E ; B D F ; E F H ] · {B⁰; B¹; B³} dΩ + ∫Ω {B^S0; B^S2}^T · [ As Ds ; Ds Fs ] · {B^S0; B^S2} dΩ   (33)

The global mass matrix M is given by

M = ∫Ω Nm^T·m·Nm dΩ                                                                                  (34)

with

Nm,I = [ NI 0 0 0 0 ; 0 NI 0 0 0 ; 0 0 NI 0 0 ; 0 0 0 NI 0 ; 0 0 0 0 NI ; 0 0 NI,x 0 0 ; 0 0 NI,y 0 0 ]   (35)

5. NUMERICAL EXAMPLE

In this section the performance of the proposed isogeometric method is considered. The dynamic response of an elliptical plate with a major radius equal to 5 and a minor radius equal to 2.5 was considered (Picture 2). Cubic basis functions were used in all examples, with an 11×11 control point net (Picture 3). The boundary conditions are fully clamped. The plate is made of a [0/90/0] composite laminate with the following material properties:

E11 = 2.45·E22,   G12 = G13 = 0.48·E22,   G23 = 0.2·E22,   ν12 = 0.23,   ρ = 1.

Picture 2. Elliptical plate geometry and associated mesh

Picture 3. Control points

This problem was also solved by Chen et al. [18] using the element free Galerkin method (EFG) and classical plate theory (CPT). For thicker plates (a/h < 100) the results are compared with those of Thai et al., who used plate elements based on an isogeometric formulation of the layerwise deformation theory (LDT) [12] and of the inverse trigonometric shear deformation theory (ITSDT) [13]. The obtained results are presented in Table 1 and they are in good agreement with the other results. Picture 4 illustrates the first six mode shapes of a moderately thick plate (a/h = 10).
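Once K and M of eqs. (33)-(34) have been assembled, the free vibration problem (32) is a standard generalized eigenvalue problem. The following minimal Python sketch is an editorial illustration of that final step only (the assembly itself is not shown; K and M are assumed to be available as dense symmetric arrays, and all names are ours, not the authors'); the non-dimensionalization follows the definition given in the caption of Table 1.

import numpy as np
from scipy.linalg import eigh

def natural_frequencies(K, M, n_modes=6):
    # solve (K - omega^2 M) q = 0, eq. (32), for the lowest modes
    eigvals, eigvecs = eigh(K, M)             # generalized symmetric eigenproblem
    eigvals = np.clip(eigvals, 0.0, None)     # guard against small negative round-off
    omega = np.sqrt(eigvals[:n_modes])
    return omega, eigvecs[:, :n_modes]

def frequency_parameter(omega, a, rho, h, E11, nu12, nu21):
    # omega_bar = omega * a^2 * sqrt(rho*h/D0), D0 = E11*h^3/(12*(1 - nu12*nu21)) [18]
    D0 = E11 * h**3 / (12.0 * (1.0 - nu12 * nu21))
    return omega * a**2 * np.sqrt(rho * h / D0)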

Table 1. Non-dimensional frequency parameter of a [0/90/0] clamped laminated elliptical plate. The frequency parameter is non-dimensionalized as ω̄ = ω·a²·(ρh/D0)^(1/2), with D0 = E11·h³/(12·(1 − ν12·ν21)) [18].

a/h   Method                Mode 1    Mode 2    Mode 3    Mode 4    Mode 5    Mode 6
5     IGA LDT [12]          14.157    19.976    27.143    28.862    34.955    35.162
5     IGA ITSDT [13]        14.6407   20.7582   28.1961   30.4532   36.4321   36.8598
5     IGA TSDT (present)    14.4230   20.3827   27.8591   29.6258   35.5606   36.2486
10    IGA LDT [12]          17.184    25.714    36.982    39.196    49.148    50.259
10    IGA ITSDT [13]        17.4003   26.1718   37.7157   39.9878   50.3411   51.2958
10    IGA TSDT (present)    17.2878   25.9383   37.5323   39.5681   49.7803   50.8396
20    IGA LDT [12]          18.329    28.280    42.255    44.321    57.090    59.827
20    IGA ITSDT [13]        18.4305   28.5333   42.6563   44.6033   57.6329   60.3551
20    IGA TSDT (present)    18.3787   28.4142   42.4677   44.4904   57.3234   60.1597
100   EFG CPT [18]          18.81     29.58     44.99     46.72     61.34     65.14
100   IGA CPT [12]          18.793    29.428    44.848    46.642    60.959    64.930
100   IGA LDT [12]          18.755    29.332    44.792    46.508    60.792    65.623
100   IGA ITSDT [13]        18.8113   29.4718   44.8216   46.5445   60.9286   64.7845
100   IGA TSDT (present)    18.7910   29.3921   44.8050   46.5328   60.8958   65.0696

Picture 4. First six mode shapes of a cubic [0/90/0] clamped laminated elliptical plate with a/h=10

6. CONCLUSION

This paper presented an isogeometric formulation of a plate element based on the TSDT theory of Reddy. The main focus was on the implementation of the presented method for the analysis of the dynamic response of composite plates. As shown through the example of the elliptical plate, the proposed method can be successfully used for the frequency analysis of composite plates.

IGA offers many advantages, of which the most important one is the absence of meshing in the classical sense. Since its introduction 10 years ago, IGA has continued to prove itself as an efficient, accurate and robust method, but industrial application through integration in commercial CAE packages is still absent. This is due to problems such as mesh refinement and the treatment of irregular NURBS geometries (e.g. trimmed surfaces). It is our opinion that, if the scientific interest in the subject continues to grow, these obstacles will be overcome.

ACKNOWLEDGMENT

This work is supported by the Ministry of Science and Technological Development of the Republic of Serbia through Technological Development Project No. 35035.

References

[1] Reddy, J.N., Mechanics of Laminated Composite Plates and Shells: Theory and Analysis, 2nd Ed., CRC Press, New York, USA, 2004.
[2] Cottrell, J.A., Hughes, T.J.R., Bazilevs, Y., Isogeometric Analysis: Toward Integration of CAD and FEA, John Wiley & Sons, Chichester, 2009.
[3] Hughes, T.J.R., Cottrell, J.A., Bazilevs, Y., "Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement", Comput. Methods Appl. Mech. Engrg., 194 (39-41) (2005) 4135-4195.
[4] Kiendl, J., Bletzinger, K.-U., Linhard, J., Wüchner, R., "Isogeometric shell analysis with Kirchhoff-Love elements", Comput. Methods Appl. Mech. Engrg., 198 (49-52) (2009) 3902-3914.
[5] Benson, D.J., Bazilevs, Y., Hsu, M.C., Hughes, T.J.R., "Isogeometric shell analysis: The Reissner-Mindlin shell", Comput. Methods Appl. Mech. Engrg., 199 (5-8) (2010) 276-289.
[6] Kiendl, J., Bazilevs, Y., Hsu, M.-C., Wüchner, R., Bletzinger, K.-U., "The bending strip method for isogeometric analysis of Kirchhoff-Love shell structures comprised of multiple patches", Comput. Methods Appl. Mech. Engrg., 199 (37-40) (2010) 2403-2416.
[7] Shojaee, S., Izadpanah, E., Valizadeh, N., Kiendl, J., "Free vibration analysis of thin plates by using a NURBS-based isogeometric approach", Finite Elements in Analysis and Design, 61 (2012) 23-34.
[8] Dornisch, W., Klinkel, S., Simeon, B., "Isogeometric Reissner-Mindlin shell analysis with exactly calculated director vectors", Comput. Methods Appl. Mech. Engrg., 253 (2013) 491-504.
[9] Shojaee, S., Valizadeh, N., Izadpanah, E., Bui, T., Vu, T.-V., "Free vibration and buckling analysis of laminated composite plates using the NURBS-based isogeometric finite element method", Composite Structures, 94 (2012) 1677-1693.
[10] Thai, C.H., Nguyen-Xuan, H., Nguyen-Thanh, N., Le, T.-H., Nguyen-Thoi, T., Rabczuk, T., "Static, free vibration, and buckling analysis of laminated composite Reissner-Mindlin plates using NURBS-based isogeometric approach", Int. J. Numer. Meth. Engng., 91 (6) (2012) 571-603.
[11] Nguyen-Xuan, H., Thai, C.H., Nguyen-Thoi, T., "Isogeometric finite element analysis of composite sandwich plates using a higher order shear deformation theory", Composites Part B: Engineering, 55 (2013) 558-574.
[12] Thai, C.H., Ferreira, A.J.M., Carrera, E., Nguyen-Xuan, H., "Isogeometric analysis of laminated composite and sandwich plates using a layerwise deformation theory", Composite Structures, 104 (2013) 196-214.
[13] Thai, C.H., Ferreira, A.J.M., Bordas, S.P.A., Rabczuk, T., Nguyen-Xuan, H., "Isogeometric analysis of laminated composite and sandwich plates using a new inverse trigonometric shear deformation theory", European Journal of Mechanics - A/Solids, 43 (2014) 89-108.
[14] Peković, O., Stupar, S., Simonović, A., Svorcan, J., Komarov, D., "Isogeometric bending analysis of composite plates based on a higher-order shear deformation theory", Journal of Mechanical Science and Technology, 28 (8) (2014) 3153-3162.
[15] Peković, O., Stupar, S., Simonović, A., Svorcan, J., Trivković, S., "Free vibration and buckling analysis of higher order laminated composite plates using the isogeometric approach", Journal of Theoretical and Applied Mechanics, 53 (2) (2015) 453-466.
[16] Piegl, L., Tiller, W., The NURBS Book, Springer-Verlag, New York, 1997.
[17] Rogers, D., An Introduction to NURBS with Historical Perspective, Morgan Kaufmann Publishers, San Francisco, 2001.
[18] Chen, X.L., "An element free Galerkin method for the free vibration analysis of composite laminates of complicated shape", Composite Structures, 59 (2) (2003) 279-289.

ON THE CORRELATION OF MICROHARDNESS WITH THE FILM ADHESION FOR SOFT FILM ON HARD SUBSTRATE COMPOSITE SYSTEMS

JELENA LAMOVEC
IChTM - Department of Microelectronic Technologies and Single Crystals, University of Belgrade, Njegoševa 12, 11000 Belgrade, Serbia, +381 11 2628587, jejal@nanosys.ihtm.bg.ac.rs
VESNA JOVIĆ
IChTM - Department of Microelectronic Technologies and Single Crystals, University of Belgrade, Njegoševa 12, 11000 Belgrade, Serbia, +381 11 2628587, vjovic@nanosys.ihtm.bg.ac.rs
IVANA MLADENOVIĆ
IChTM - Department of Microelectronic Technologies and Single Crystals, University of Belgrade, Njegoševa 12, 11000 Belgrade, Serbia, +381 11 2628587, ivana@nanosys.ihtm.bg.ac.rs
BOGDAN POPOVIĆ
IChTM - Department of Microelectronic Technologies and Single Crystals, University of Belgrade, Njegoševa 12, 11000 Belgrade, Serbia, +381 11 2628587, bpopovic@nanosys.ihtm.bg.ac.rs
MILOŠ VORKAPIĆ
IChTM - Department of Microelectronic Technologies and Single Crystals, University of Belgrade, Njegoševa 12, 11000 Belgrade, Serbia, +381 11 2628587, worcky@nanosys.ihtm.bg.ac.rs
VESNA RADOJEVIĆ
Faculty of Technology and Metallurgy, University of Belgrade, Karnegijeva 4, 11000 Belgrade, Serbia, +381 11 3303618, vesnar@tmf.bg.ac.rs

Abstract: Composite systems of monolayered electrodeposited Ni and Cu thin films (5-10 μm) on monocrystalline Si wafers and on 50 μm-thick electrodeposited Ni films as the substrates were fabricated. On the basis of the difference in hardness, these systems can be regarded as "soft film on hard substrate" composite systems. The adhesion of the electrodeposited films on the different substrates was investigated by Vickers microindentation hardness testing. Strong adhesion corresponds to an extended plastic deformation zone at the film/substrate interface, and interfacial tension effects contribute to the measured hardness. The composite hardness models of Chicot-Lesage (C-L) and Chen-Gao (C-G) were applied in order to investigate the influence of adhesion on the microhardness test results. When adhesion exists between the film and the substrate, the critical reduced depth (the ratio between the radius of the plastic zone beneath the indent and the indentation depth) increases. Microhardness measurements are a useful tool for the assessment and quantification of the film/substrate interface strength.

Keywords: Vickers microhardness, adhesion, composite system, composite hardness model, critical reduced depth.
1. INTRODUCTION

Complex structures of thin films on substrates are often used in the fabrication of different microelectromechanical devices. A thin film on a substrate can be considered as a composite system whose properties depend not only on the particular material properties of the film and the substrate, but also on composite parameters such as good adhesion, controlled residual stresses, good corrosion resistance, etc.

The measured hardness is influenced by a number of factors such as film thickness, indentation depth, film and substrate hardness and their ratio, as well as adhesion. It has been shown that microhardness testing can be a useful technique for assessing the adhesion of thin films to the substrate [1-7].

2. THEORY OF COMPOSITE HARDNESS AND ADHESION MODELS

Hardness and adhesion testing are the most important and widely used techniques for assessing the structural and mechanical properties of composite systems. As the thickness of the film is very small, the influence of the substrate must be considered during the hardness determination.

There is a need to determine the film hardness solely from the composite hardness measurements. Both the composite and the film hardness are load-dependent, and the change of the composite and the film hardness with load depends on the structure of the composite system [5, 8].

The composite hardness model of Chicot-Lesage (C-L) was found to be appropriate for the analysis of the experimental data and the determination of the film hardness [9]. The model is based on the analogy between the variation of the Young modulus of reinforced composites as a function of the volume fraction of particles and that of the composite hardness.

Meyer's law expresses the variation of the size of the indent as a function of the applied load P. For the particular case of a film-substrate couple, the evolution of the measured diagonal with the applied load can be expressed by a similar relation:

P = a*·d^n*                                                                                          (1)

The variable part of the hardness number with load is represented by the factor n*. The following expression is adopted:

f = (t/d)^m,   with  m = 1/n*                                                                        (2)

The composite hardness can be expressed by the following relation:

HC = (1 − f)·[ (1 − f)/HS + f/HF ]^(−1) + f·[ HS + f·(HF − HS) ]                                      (3)

The hardness of the film is the positive root of the equation

A·HF² + B·HF + C = 0,   with
A = f²·(f − 1),
B = (−2f³ + 2f² − 1)·HS + (1 − f)·HC,
C = f·HC·HS + f²·(f − 1)·HS²                                                                          (4)

The value of m (the composite Meyer index) is calculated by a linear regression performed on all of the experimental data obtained for a given film/substrate couple, deduced from the relation

ln d = m·ln P + b                                                                                    (5)

With the known value of m, only the hardness of the film remains to be calculated.

For the evaluation of the adhesion properties of the thin films, the Chen-Gao (C-G) method was chosen [2]. This method introduces the composite hardness as a function of the critical reduced depth beyond which the substrate material has no effect on the measured hardness. A large value of the critical reduced depth (the ratio between the radius of the plastic zone beneath the indentation and the indentation depth) corresponds to good adhesion, while low values indicate poor adhesion of the films, as shown in Picture 1. The correlation between the composite hardness HC and the critical reduced depth b is found as

HC = HS + [ (m + 1)·t/(m·b·D) − t^(m+1)/(m·b^(m+1)·D^(m+1)) ]·(HF − HS)                               (6)

If t^(m+1)/(m·b^(m+1)·D^(m+1)) ≈ 0, then

HC = HS + [ (m + 1)·t/(m·b·D) ]·(HF − HS)

where HS and HF are the hardness of the substrate and of the film, t is the film thickness, D is the indentation depth, m is the power index and b is the critical reduced depth. The critical reduced depth b has different values for various film-substrate systems; even for the same film-substrate system, b differs when the adhesion differs.

Picture 1. (a) Schematic representation of the deformation associated with indentation in a coated substrate (weak adhesion); (b) the effect of a strong film/substrate interface [2]

3. EXPERIMENTAL

3.1. Substrates and film deposition

The substrates for the electrodeposition of the Ni and Cu thin films were monocrystalline Si wafers with (100) and (111) orientations and 50 μm-thick electrodeposited Ni films, respectively.

The plating base for the Si wafers consisted of sputtered layers of 100 Å Cr and 1000 Å Ni. Electrodeposition was carried out in DC galvanostatic mode. Thin and thick Ni coatings were deposited from a proprietary sulphamate electrolyte consisting of 300 g/l Ni(NH2SO3)2·4H2O, 30 g/l NiCl2·6H2O, 30 g/l H3BO3 and 1 g/l saccharine, and thin Cu films were deposited from a self-made sulphate electrolyte containing 240 g/l CuSO4·5H2O and 60 g/l H2SO4. The deposition temperatures were maintained at 50 °C and 20 °C, respectively. The deposition time was determined from the plating surface, the projected thickness of the deposits and the cathodic current efficiency.
3.2. Microhardness testing and characterization

The mechanical properties of the composite systems were characterized using a Vickers microhardness tester Leitz Kleinhärteprüfer DURIMET I. Different loads, ranging from 4.9 N down to 0.049 N, were used. Three indentations were made at each load, yielding six indentation diagonal measurements from which the average hardness could be calculated. Indentation was done at room temperature.

The microstructure of the samples was examined by metallographic microscopy (Carl Zeiss microscope Epival Interphako).

4. RESULTS AND DISCUSSION

4.1. Absolute hardness of the substrates

Microhardness testing was performed both on the uncoated substrates and on the different composite systems. The PSR (Proportional Specimen Resistance) model [10] was chosen for analyzing the variation of the substrate microhardness with load.

The average values of the indent diagonal d (μm) were calculated from several independent measurements on every specimen for the different applied loads P (N). The absolute and composite hardness values H0 and HC were calculated using the formula

HC = 0.01854·P·d^(−2)                                                                                (7)

where 0.01854 is the geometrical factor for the Vickers indenter.

The absolute hardness of the substrates HS was calculated as 6.49 GPa and 8.71 GPa for the (100) and (111)-oriented single-crystal Si substrates, respectively, and 3.89 GPa for the thick electrodeposited Ni film used as the substrate [11, 12].
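The calculations behind eqs. (2), (4), (5) and (7) are easily scripted. The following Python sketch is an editorial illustration only (it is not the authors' code; function names and the unit convention are ours, with eq. (7) used exactly as printed, so the hardness comes out in whatever units the chosen P and d imply): it extracts the composite Meyer index from load-diagonal pairs, evaluates the composite hardness, and recovers the C-L film hardness as the positive root of eq. (4).

import numpy as np

def meyer_index(P, d):
    # composite Meyer index m from eq. (5): ln d = m ln P + b (least squares)
    m, _ = np.polyfit(np.log(P), np.log(d), 1)
    return m

def composite_hardness(P, d):
    # eq. (7); units follow the paper's convention for P and d
    return 0.01854 * P / d**2

def film_hardness_CL(Hc, Hs, t, d, m):
    # Chicot-Lesage film hardness: positive root of eq. (4), with f from eq. (2)
    f = (t / d) ** m
    A = f**2 * (f - 1.0)
    B = (-2.0 * f**3 + 2.0 * f**2 - 1.0) * Hs + (1.0 - f) * Hc
    C = f * Hc * Hs + f**2 * (f - 1.0) * Hs**2
    roots = np.roots([A, B, C])
    real = roots[np.isreal(roots)].real
    return real[real > 0].max()

As a quick self-check, inserting a composite hardness generated from eq. (3) back into film_hardness_CL returns the film hardness used to generate it.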

4.2. Composite and film hardness tendency of various "soft film on hard substrate" composite systems

Two different composite systems were investigated: thin electrodeposited Ni films on Si(100) and Si(111)-oriented wafers, and thin electrodeposited Cu films on thick and harder electrodeposited Ni films as the substrates. There is an essential difference between the two systems: the first requires a sputtered adhesion layer because of the hard and brittle semiconductor Si substrate, while the second consists of two metals with similar crystallographic structure.

The variation of the composite hardness HC and film hardness HF of the electrodeposited Ni films of different thickness on the single-crystalline Si substrates with the relative indentation depth h/t, where h is the indentation depth and t the total thickness of the film, is shown in Picture 2. For low indentation depths (h/t ≤ 0.1) the response of the system is mostly the response of the film. For indentation depths between 0.1 and 1 the hardness response belongs to the whole composite system and depends on both the film and the substrate hardness values.

Picture 2. Variation in composite hardness HC (a) and film hardness HF according to the C-L model (b) with normalized depth h/t for ED Ni films on Si substrates

The system of ED Ni film on Si(100) substrate has slightly higher values of composite and film hardness than the system of ED Ni film on Si(111) substrate. The hard but brittle single-crystalline Si substrates facilitated crack formation, and for these systems the film hardness has a descending character.

The change of the composite and film hardness with relative indentation depth for the system of a soft ED Cu film on a thick and harder ED Ni film as the substrate is shown in Picture 3.

Picture 3. Variations of the composite and film hardness (according to the C-L model) with the relative indentation depth for electrodeposited Cu films on ED Ni films as the substrates. Film thickness and deposition current densities are given in the diagram

An increase in current density led to grain size refinement and a hardness increase. In contrast to the system of ED Ni film on single-crystal Si substrate, both the composite and the film hardness have ascending characters; deformation hardening of the polycrystalline fine-grained ED Ni substrate occurred.

4.3. Composite hardness and adhesion of various "soft film on hard substrate" composite systems

It is observed that adhesion influences the microhardness values of the composite systems and films. Equation (6) was used to calculate the critical reduced depth b for the system of thin electrodeposited Ni films on Si(100) and Si(111) substrates and for the system of thin electrodeposited Cu films on a thick electrodeposited Ni film as the substrate.

All of the systems belong to the "soft film on hard substrate" composite system type. The substrate and composite hardness, HS and HC respectively, were obtained from the microhardness measurements reported in the previous section.

In Pictures 4 and 5, the values of ΔH = HS − HC for the ED Ni films on (100) and (111)-oriented Si substrates are plotted against the ratio between the film thickness and the indentation diagonal, t/d. A linear fit of the data was performed and the calculated values of b are reported in the same figures.
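The fits in Pictures 4-6 are ordinary least-squares lines through the (t/d, ΔH) points. A compact Python sketch of that step is given below purely as an editorial illustration; converting the fitted slope into the critical reduced depth b additionally requires HF, the power index m and the ratio between the indentation diagonal d and the indentation depth D, as follows from the simplified form of eq. (6). The value d/D = 7 used here is the nominal Vickers indenter geometry assumed for this sketch, not a value quoted in the paper.

import numpy as np

def hardness_difference_fit(t_over_d, delta_H):
    # least-squares line delta_H = slope*(t/d) + intercept, as in Pictures 4-6
    slope, intercept = np.polyfit(t_over_d, delta_H, 1)
    return slope, intercept

def critical_reduced_depth(slope, Hs, Hf, m, d_over_D=7.0):
    # simplified eq. (6): Hs - Hc = (m+1)*t*(Hs - Hf)/(m*b*D); with D = d/d_over_D the
    # slope of delta_H versus t/d equals (m+1)*d_over_D*(Hs - Hf)/(m*b)
    return (m + 1.0) * d_over_D * (Hs - Hf) / (m * slope)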

Picture 4. Experimental values of the hardness difference as a function of the ratio between film thickness and indentation diagonal, for ED Ni films on Si(100) substrate deposited under different conditions (Ni1010 denotes a film thickness of 10 μm and a current density of 10 mA/cm2). The film deposition parameters and the critical reduced depth b are reported

Picture 5. Experimental data of the hardness difference as a function of the ratio between film thickness and indentation diagonal for different ED Ni films on Si(111) substrate

With increasing ED Ni film thickness, the decreasing values of the critical reduced depth b correspond to decreasing adhesion in both cases.

In Picture 6, the values of ΔH = HS − HC for the ED Cu films on a thick ED Ni film as the substrate are plotted against the ratio between the film thickness and the indentation diagonal, t/d. A linear fit of the data was performed and the calculated values of b are reported in the same picture.

Picture 6. Experimental data of the hardness difference as a function of the ratio between film thickness and indentation diagonal for thin Cu films electrodeposited on a thick Ni film. The critical reduced depth b is reported

The values of the critical reduced depth b, which correspond to good adhesion of the film to the substrate, are significantly higher for the ED Cu film on the thick ED Ni film than for the systems of ED Ni films on Si substrates. Adhesion of metallic films to semiconductors is usually poor even when an adhesion layer is present.

In the assessment of the adhesion of a particular film/substrate combination, all factors that may influence the adhesion between film and substrate must be considered; some of them are the deposition technique, the critical film thickness, the crystallographic structure and compatibility. Cross-section images of the ED Cu/ED Ni contact interfaces obtained with optical microscopy are given in Picture 7, in which evidence of poor adhesion at the ED Ni/ED Cu interface can be seen.

Picture 7. Cross-section of the Cu/Ni film with different layer thickness: (a) the Ni and Cu sublayer thickness in the film is 5 μm and the total thickness of the film is 25 μm; (b) the sublayer thickness is 20 μm and the total thickness of the film is 85 μm [12]

With increasing thickness of the film and the substrate for the ED Ni/ED Cu system, the adhesion decreases. Experiments have shown good adhesion characteristics for ED Cu film thicknesses up to 10 μm. The large quantity of hydrogen bubbles captured at the ED Ni/ED Cu interface, as a side-reaction of the electrodeposition process, is the reason for the poor adhesion.

5. CONCLUSION

An analysis of the composite hardness, film hardness and adhesion of different composite systems of the same type ("soft film on hard substrate") was given. Thin films of Ni and Cu were electrodeposited on single-crystal Si substrates and on thick polycrystalline ED Ni films as the substrates, respectively.

Microindentation measurements were performed on the uncoated substrates and on the film-substrate composite systems in order to observe their hardness response according to their different structures. The composite hardness model of Chicot-Lesage was found to be appropriate for the calculation of the film hardness of the different composite systems.

Different microstructures and deformation responses, and consequently different hardness values of the substrate and the film, as well as their relative difference, are the most important parameters that influence the composite hardness value.

Adhesion influences the microhardness of the films. A composite hardness model was used for the evaluation of the adhesion of the Ni and Cu films on the different substrates. When adhesion exists between the film and the substrate, the critical reduced depth b (the ratio of the plastic zone radius to the indentation depth) increases.

The system of a thin ED Cu film on a thick ED Ni substrate has higher values of the critical reduced depth b than the system of ED Ni films on Si substrates, which corresponds to better adhesion properties.

For the same film/substrate combination, a large value of b usually corresponds to good adhesion of the film. With increasing indentation load, the hardness difference ΔH = HS − HC decreases faster for poor adhesion. Increasing the film thickness leads to a loss of adhesion properties and possible delamination of the film.

ACKNOWLEDGEMENT

This work was funded by the Ministry of Education, Science and Technological Development of the Republic of Serbia through the projects TR 32008, TR 34011 and III 45019.

References

[1] Hou, Q.R., Gao, J., Li, S.J.: "Adhesion and its influence on micro-hardness of DLC and SiC films", The European Physical Journal B, 8 (1999) 493-496.
[2] Chen, M., Gao, J.: "The adhesion of copper films coated on silicon and glass substrates", Modern Physics Letters B, 14 (3) (2000) 103-108.
[3] Magagnin, L., Maboudian, R., Carraro, C.: "Adhesion evaluation of immersion plating copper films on silicon by microindentation measurements", Thin Solid Films, 434 (1) (2003) 100-105.
[4] Raygani, A., Magagnin, L.: "Gold metallization on silicon by galvanic displacement", Electrochemical Society Transactions, 41 (35) (2012) 3-8.
[5] He, J.L., Li, W.Z., Li, H.D.: "Hardness measurement of thin films: separation from composite hardness", Applied Physics Letters, 69 (10) (1996) 1402-1404.
[6] Khlifi, K., Larbi, A.B.C.: "Mechanical properties and adhesion of TiN monolayer and TiN/TiAlN nanolayer coatings", Journal of Adhesion Science and Technology, 28 (1) (2014) 85-96.
[7] Qingrun, H., Gao, J.: "Micro-hardness and adhesion of diamond-like carbon films", Modern Physics Letters B, 11 (16) (1997) 757-764.
[8] Bull, S.J.: "Interface engineering and graded films: Structure and characterization", Journal of Vacuum Science and Technology A, 19 (4) (2001) 1404-1414.
[9] Chicot, D., Lesage, J.: "Absolute hardness of films and coatings", Thin Solid Films, 254 (1995) 123-130.
[10] Li, H., Bradt, R.C.: "The microhardness indentation load/size effect in rutile and cassiterite single crystals", Journal of Materials Science, 28 (4) (1993) 917-926.
[11] Lamovec, J., Jović, V., Ranđelović, D., Aleksić, R., Radojević, V.: "Analysis of the composite and film hardness of electrodeposited nickel coatings on different substrates", Thin Solid Films, 516 (2008) 8646.
[12] Lamovec, J., Jović, V., Mladenović, I., Stojanović, D., Kojović, A., Radojević, V.: "Indentation behavior of soft film on hard substrate composite system type", Zaštita materijala, 56 (3) (2015).

A COMPARISON OF DIFFERENT CONVEX CORNER COMPENSATION STRUCTURES APPLICABLE IN ANISOTROPIC WET CHEMICAL ETCHING OF {100} ORIENTED SILICON

VESNA JOVIĆ
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Serbia, vjovic@nanosys.ihtm.bg.ac.rs
JELENA LAMOVEC
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Serbia, jejal@nanosys.ihtm.bg.ac.rs
MILČE SMILJANIĆ
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Serbia, smilce@nanosys.ihtm.bg.ac.rs
ŽARKO LAZIĆ
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Serbia, zlazic@nanosys.ihtm.bg.ac.rs
BOGDAN POPOVIĆ
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Serbia, bpopovic@nanosys.ihtm.bg.ac.rs
PREDRAG POLJAK
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Serbia, predrag.poljak@nanosys.ihtm.bg.ac.rs

Abstract: This paper presents the fabrication of microcantilevers on {100} oriented Si substrates by bulk micromachining. Two types of CCC (Convex Corner Compensation) structures have been analyzed, namely a <100> oriented simple beam and a structure using symmetric rectangular blocks oriented in the <110> direction at the apex of the square peg. The etching solution was a 30 wt.% KOH water solution at an etching temperature of 80 °C. The detailed construction and etching behavior of both structures are given and explained.

Keywords: anisotropic wet chemical etching, bulk silicon micromachining, convex corner compensation, KOH etching solution, microcantilevers.
1. INTRODUCTION

The fabrication of micro-electro-mechanical structures (MEMS) is generally referred to as micromachining [1,2]. Among the different micromachining techniques [3] for manufacturing various micro and nano structures, wet anisotropic etching has had a prominent role since the very beginning of the development of MEMS technologies during the sixties of the 20th century [4,5].

There is a high demand for 3D structures with a diversity of shapes, including membranes, bossed-type features, suspended beams of different shapes, seismic masses, channels, etc. However, the range of obtainable shapes is limited [6]. During the course of anisotropic etching, different crystallographic planes have different etching rates [7,8]. Because of the differences in the etching rates, some planes grow while others disappear. While etching a peg-like structure (the so-called mesa structure) with convex corners, the fast etching planes dominate, i.e. the fast-etching planes increase in length while the slow-etching planes decrease and finally disappear, changing the overall island shape. This is an inherent feature of anisotropic wet chemical etching and, as a consequence, severe convex corner undercutting distorts the desired shape of the structure. Therefore, in such cases it is highly desirable to eliminate the undercutting by some means, e.g. by the design and application of convex corner compensation (CCC) structures.

The shape and dimensions of these additional structures, constructed at the corners which have to be preserved during etching, depend on the type of etching solution and on the spatial requirements. It turns out that all these requirements become more severe as the mesa structure becomes higher. Generally speaking, every particular shape with a defined height has an optimum CCC structure, with dimensions obtained from experimental considerations for the specific etching solution.

In this paper we work out two types of CCC structures, namely a <100> oriented bar [9] and a structure which contains two rectangular blocks placed perpendicular to each other at the apex of the convex corner which has to be protected [10]. Both structures are applicable for the fabrication of silicon microcantilevers by bulk micromachining on {100} oriented silicon substrates. The solution used for the wet anisotropic etching was a 30 wt.% KOH water solution at 80 °C.

2. EXPERIMENTAL

N-type {100} oriented Si wafers, 400 ± 25 μm thick, polished on both sides, with a resistivity of 3-5 Ωcm, have been used for the microcantilever fabrication.

Silicon dioxide was used as the masking material in all etching steps of the microcantilever fabrication. SiO2 (at least 1.4 μm thick) was thermally grown in an atmosphere of water-saturated oxygen at 1050 °C and patterned in a classic photolithography process using an EVG620 tool designed for optical double-side lithography (EV Group).

The bulk micromachined microcantilevers are oriented in the <110> direction, and a sketch of the successive fabrication steps is given in Picture 1. All orientation alignments have been done with respect to the primary flat direction: <110> ± 0.5°.

Picture 1. Sketch of the fabrication steps (1-6) of a <110> oriented Si microcantilever on a {100} Si wafer; the cross-section is shown in step (7). Anisotropic wet etching is performed from both sides of the wafer, thanks to double-side alignment during photolithography.

The etching process was carried out in a thermostated Pyrex vessel containing about 0.8 dm3 of solution, allowing temperature stabilization within ± 0.5 °C. The vessel is sealed with a screw-on lid which includes a tap-water cooled condenser to minimize evaporation during etching. During etching the solution is electromagnetically stirred at 300 rpm and the Si wafers are held vertically inside the etching solution.

As an example, the fabrication of a microcantilever of 100 μm thickness is presented. As can be seen from Picture 1, the first silicon etching step (4) was done from the back side. During this step about half of the wafer thickness (≈ 200 μm) was etched away from the back, while the front side was protected. During the next step (5) the wafer is etched from both sides, and in this step the microcantilever beams are formed when the substrate becomes etched through on the unmasked parts. In this step the compensation structure plays the remarkable role of protecting the convex corner at the beam end from undesired undercutting.

The connection between the etching depth (d) and the etching time (τ) is given as

τ = d / R<100>                                                                                       (1)

where R<100> is the etching rate of the {100} oriented Si substrate.

To determine which crystallographic plane is the fastest etching plane in the considered solution, etching was performed under the same conditions on the structure schematically depicted in Picture 2. This structure has the same orientation as the microcantilever, i.e. the edges of the convex corners are oriented in the <110> direction.

Picture 2. (a) Photograph from the metallurgical microscope of convex corners on a square island with edges oriented along <110> directions on a {100} oriented Si wafer after etching in 30 wt.% KOH solution at 80 °C for 120 min. The white interrupted line shows the border of the square island as defined during photolithography on the masking material. (b) Schematic of the convex corner structure used to determine the undercutting (UC) and the angle α.

The dependence of the undercutting, denoted as UC in Picture 2, on the etching depth (d), or on the etching time (τ), is an indispensable requirement for the analysis and design of CCC geometries [11-13].

3. RESULTS AND DISCUSSION

For the structure shown in Picture 2 it can be shown that the undercutting is given as

UC = K·d / sin α                                                                                     (2)

where K is the ratio of the etch rates (R<hk0>·R<100>^(−1)), d is the etch depth and α is the angle between the <110> and <hk0> directions.

The diagram in Picture 3 is the experimentally determined dependence of the convex corner undercutting, UC, on the etched depth, d, for the 30 wt.% KOH water solution at 80 °C.

Picture 3. Dependence of the corner undercutting (UC) on the etching depth (d) for anisotropic wet etching in 30 wt.% KOH water solution (linear fit UC = 2.8·d, R² = 0.9976). The edges of the square island are oriented parallel to the <110> direction on the {100} oriented Si substrate.

From the slope of the straight line and the measured value of the angle α, it is possible to determine the value of the constant K according to eq. (2) for the applied solution. The obtained results are summarized in Table 1, in which the nature of the fast etching planes is designated for the 30 wt.% KOH solution at 80 °C. Other characteristics of the investigated solution which are important for microcantilever fabrication by bulk micromachining of Si are also given in the table.

Table 1. Important etching properties of the 30 wt.% KOH solution at 80 °C

                         R<100> (μm/min)   R<hk0> (μm/min)   R<hk0>/R<100>   UC·d^(−1)   angles (°)
experimental results          1.2               1.7               1.4            2.8      149.2, 29.7, 150.4
calculated for <410>           -                 -                 -              -       151.9, 30.9, 149.0

From Table 1 it can be seen that the fastest etching plane belongs to the {410} family, and with the given parameters the dimensions of the <100> oriented CCC beam structure can be obtained for a desired etching depth (d).

Picture 4 shows a sketch of the progress of the etching front during the course of the <100> compensation beam etching. From the geometry considerations in Picture 4 it can be shown that [14]

L = d·( R<hk0> / (R<100>·cos α·sin α) − 2·sin²α / (cos α·sin α) )                                    (3)

w = 2·d                                                                                              (4)

where L and w are the CCC beam length and width, respectively, α is the angle between the <110> and <hk0> directions, and R<hk0> is the etching rate of the fastest etching plane {hk0} at the convex corner.

Picture 4. Schematic representation of successive etch fronts for the used corner compensation structure, namely the <100> oriented beam.

Picture 5. Photographs from the metallographic microscope of successive etching steps of the <100> compensation beam on the corner of a microcantilever oriented in the <110> direction on a {100} oriented Si wafer. The etching solution is 30 wt.% KOH at 80 °C. Etching times are 20 min (a), 45 min (b) and 90 min (c).

Picture 5 displays photographs from the metallographic microscope of successive etching steps of the <100> oriented compensation beam etched in the KOH solution. Pictures 5 (a) and (b) are photographs of the preserved convex corner at the microcantilever edge obtained by applying the <100> oriented compensation beams.

A sharp right-angled corner with no deformation may be obtained by applying the <100> oriented compensation beam with dimensions calculated in accordance with eq. (4) for the beam width and eq. (3) for the beam length. Picture 6 shows how well the convex corner is preserved by such compensation, looking from the front (a) and from the back (b) of the fabricated microcantilever. The photograph in Picture 6 (c) is given for comparison, to show the convex corner shape after anisotropic wet etching without any additional structure for convex corner compensation.

Picture 6. Photographs from the metallographic microscope of the fabricated microcantilever convex corner. The applied compensation is a <100> oriented beam, and etching was performed in 30 wt.% KOH solution at 80 °C. (a) Corner appearance from the front side with masking material, (b) the same corner from the back side, (c) the convex corner without any compensation etched under the same conditions.

The schematic representation in Picture 7 shows the etching steps of the second applied CCC structure, in the form of symmetric rectangular blocks at the very apex of the convex corner. At the right corner of this figure a photograph from the metallurgical microscope of one CCC on a microcantilever immediately before anisotropic etching is given. "a" denotes the length of the free space available for constructing the CCC structure. When looking at this schematic representation, it must be kept in mind that etching proceeds from both sides. When the etching front OO1O2 is reached, the convex corner is defined, since the substrate must be etched through at that moment of etching. If this is not the case, i.e. if etching proceeds further from both sides, the figure O'O'1O'2 is reached and the convex corner at O becomes undercut.

Picture 7. Schematic representation of successive etching steps for the CC structure with symmetric rectangular blocks at the island's apex, with edges oriented along <110> directions, on a {100} oriented Si wafer.

In Picture 8 successive etching steps of this type of compensation are given. Picture 8 (c) shows the last step, after which the front and back side etching must meet if the convex corner at the microcantilever is to be preserved.

Picture 8. Photographs from the metallographic microscope of successive etching steps for the CC structure with symmetric rectangular blocks at the island's apex, with edges oriented along <110> directions, on a {100} oriented Si substrate. The etching solution is 30 wt.% KOH at 80 °C. Etching times are 20 min (a), 51 min (b) and 65 min (c).

Applying the same strategy as for the previous compensation, and having in mind that the etching figure OO1O2 has to be reached, the length of the compensation (L') and its width (w') can be calculated as:

L' = d·K / sin α                                                                                     (5)

w' = L'·(1 − sin α) / cos α                                                                          (6)

where all symbols have already been explained. Based on the experimental results for the etching rate of the fastest etching plane and the ratio of the etching rates of this plane and the substrate plane, the lengths and widths of the compensation structures were derived for a given microcantilever thickness.
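That derivation is a direct evaluation of eqs. (3)-(6). The short Python sketch below is an editorial illustration only (the functions and the example etch depth are ours; the rates and the angle are the experimental values from Table 1, while the printed dimensions are simply what the formulas return and are not results quoted in the paper).

import numpy as np

def ccc_beam_dimensions(d, R100, Rhk0, alpha_deg):
    # <100> oriented compensation beam, eqs. (3) and (4): length L and width w
    a = np.radians(alpha_deg)
    L = d * (Rhk0 / (R100 * np.cos(a) * np.sin(a)) - 2.0 * np.sin(a)**2 / (np.cos(a) * np.sin(a)))
    w = 2.0 * d
    return L, w

def ccc_block_dimensions(d, R100, Rhk0, alpha_deg):
    # symmetric rectangular blocks at the corner apex, eqs. (5) and (6)
    a = np.radians(alpha_deg)
    K = Rhk0 / R100
    L_prime = d * K / np.sin(a)
    w_prime = L_prime * (1.0 - np.sin(a)) / np.cos(a)
    return L_prime, w_prime

# rates and angle from Table 1 (30 wt.% KOH, 80 degC); d is an illustrative etch depth in um
print(ccc_beam_dimensions(d=150.0, R100=1.2, Rhk0=1.7, alpha_deg=29.7))
print(ccc_block_dimensions(d=150.0, R100=1.2, Rhk0=1.7, alpha_deg=29.7))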


Picture 9. The second compensation structure on a convex corner, but with anisotropic wet etching done from the front side only.

Picture 9 shows two examples of anisotropic etching applied only from the front side of the wafer. In (a) it can be seen that this is the moment at which the substrate must be etched completely through in order to preserve the convex corner; (b) clearly shows that further etching results in corner undercutting.

It was found that both compensation structures perform well, i.e. they can preserve the shape of the convex corner. It is obvious, however, that the compensation with symmetric rectangular blocks at the island's apex, with edges oriented along <110> directions, can be applied only when the anisotropic etching is performed from both wafer sides simultaneously. This compensation structure is also more space consuming.

From these experimental results we can say that both applied convex corner compensations can be successfully used during the fabrication of silicon microcantilevers by bulk silicon micromachining. For the CCC structure with symmetric rectangular blocks at the island's apex with edges oriented along <110> directions, the length of the free space ("a" in Picture 7) must be larger than the length of the compensation (L'). In the case of the fabrication of a microcantilever of 100 μm thickness, the length of the compensation structure is 272 μm, which requires a free space of at least 300 μm. When the compensation structure is a <100> oriented beam, the required free space must be larger than L·cos 45°, which is less than in the previous case.

4. CONCLUSION

A detailed investigation has been carried out to study the prevention of convex corner undercutting during the fabrication of silicon microcantilever beams released by bulk silicon micromachining, applying anisotropic wet etching in 30 wt.% KOH solution at 80 °C. The microcantilevers were fabricated on {100} oriented silicon substrates by anisotropic etching from both sides.

Two types of convex corner compensation structures have been applied: a simple <100> oriented beam and symmetric rectangular blocks at the island's apex with edges oriented along <110> directions.

Determination procedures for the fastest etching plane and for the ratio of the etching rates of this plane and the substrate plane were described. These data, together with the microcantilever dimensions, are necessary for the determination of the dimensions of the compensating structures.

ACKNOWLEDGEMENT

This work was funded by the Republic of Serbia Ministry of Education, Science and Technological Development through the project TR 32008: "Micro and Nanosystems for Power Engineering, Process Industry and Environmental Protection - MiNaSyS".

References

[1] Lindroos, V., Tilli, M., Lehto, A., Motooka, T., Handbook of Silicon Based MEMS Materials and Technologies, William Andrew, Burlington, MA, 2010.
[2] Madou, M., Fundamentals of Microfabrication, 1st ed., CRC Press, Boca Raton, FL, USA, 1997.
[3] Leondes, T.C. (editor), MEMS/NEMS Handbook - Techniques and Applications, Springer, New York, 2006.
[4] Waggener, A.H., "Electrochemically controlled thinning of silicon", The Bell System Technical Journal, 49(3) (1970) 473-475.
[5] Bean, E.K., "Anisotropic etching of silicon", IEEE Transactions on Electron Devices, ED-25(10) (1978) 1185-1192.
[6] Pal, P., Sato, K., "A comprehensive review on convex and concave corners in silicon bulk micromachining based on anisotropic wet chemical etching", Micro and Nano Systems Letters, 3(6) (2015), DOI 10.1186/s40486-015-0012-4.
[7] Frühauf, J., Shape and Functional Elements of the Bulk Silicon Microtechnique, 1st ed., Springer, Berlin, Germany, 2005.
[8] Pal, P., Sato, K., Chandra, S., "Fabrication techniques of convex corners in a (100)-silicon wafer using bulk micromachining: a review", Journal of Micromechanics and Microengineering, 17(10) (2007) R111-R133.
[9] Mayer, G.K., Offereins, L.H., Sandmaier, H., Kühl, K., "Fabrication of non-underetched convex corners in anisotropic etching of (100) silicon in aqueous KOH with respect to novel micromechanic elements", J. Electrochem. Soc., 137(12) (1990) 3947-3951.
[10] Biswas, K., Das, S., Kal, S., "Analysis and prevention of convex corner undercutting in bulk micromachined silicon microstructures", Microelectronics Journal, 37(8) (2005) 765-769.
[11] Smiljanić, M.M., Jović, V., Lazić, Ž., "Maskless convex corner compensation technique on a (100) silicon substrate in a 25 wt% TMAH water solution", Journal of Micromechanics and Microengineering, 22(11) (2012) 1-11.
[12] Jović, V., Lamovec, J., Mladenović, I., Smiljanić, M.M., Popović, B., "Prevention of convex corner undercutting in fabrication of silicon microcantilevers by wet anisotropic etching", Proc. 29th International Conference on Microelectronics (MIEL 2014), Belgrade, Serbia, 12-15 May (2014) 243-246.
[13] Jović, V., Smiljanić, M.M., Lamovec, J., Popović, M., "Microfabrication by mask-maskless wet anisotropic etching for realization of multilevel structures in {100} oriented Si", Proc. 28th International Conference on Microelectronics (MIEL 2012), Niš, Serbia, 13-16 May (2012) 139-142.
[14] Pal, P., Singh, S.S., "A simple and robust model to explain convex corner undercutting in wet bulk micromachining", Micro and Nano Systems Letters, 1:1 (2013).

RADIOCESIUM-137 IN THE ENVIRONMENT AND THE EFFECT OF RADIATION-HYGIENE CERTIFICATION ON FOOD

NATAŠA PAJIĆ
Military Technical Institute, Belgrade, natasa.pajic969@gmail.com
TATJANA MARKOVIĆ
Military Technical Institute, Belgrade, tanjin.mejl@gmail.com

Abstract: Radiation-hygiene certification involves the identification of biologically significant radionuclides and the determination of their activity levels. Attention is focused on those radionuclides that produce the highest doses in human tissues. A typical representative of this group is radioactive Cs-137, which is relevant for the certification of foodstuffs.

Keywords: radionuclide, dose, activity, certification, radiocesium.

1. INTRODUCTION

Man may be exposed to radioactivity from the environment in several ways. If the radioactive substances originate from the air, the radioactive particles can be inhaled or deposited on the skin. Particles deposited on nearby surfaces may be transferred by wind or human activity and can also be inhaled. A very complex mechanism is the one that involves the food chain. The pathways that expose man to radioactive substances from the air and precipitation, as well as through the food chain, are very complex, and the details of the mechanisms of transport of radioactivity are not entirely known. Physical, chemical and biological pathways in the environment are connected through biophysical and biochemical processes which determine the existence of all living organisms [1]. Some of these processes lead to significant dilution, others to renewed physical or biological concentration, followed by the transfer of radioactivity through different and sometimes interdependent pathways to man.

There are three different ways of exposure to radioactivity:
- external radiation from radionuclides in the air and radionuclides deposited on the ground,
- inhalation of resuspended matter,
- radioactivity which originates from food.

The importance of the different pathways of exposure is determined by:
- the type of radiation (alpha, beta, gamma) and its characteristics (half-life),
- the physical and chemical characteristics of the environment (gaseous, liquid, solid),
- dispersion,
- the characteristics of the environment (air, wildlife, agricultural production),
- the characteristics of the exposed population (diet, habits, age, location).

2. MATERIALS AND METHODS

For determining the presence of gamma emitters in samples of food and consumer items, the high resolution gamma spectrometry method is used; the samples are homogenized and placed in Marinelli geometry with a volume of 1 l. After determining the radioactivity of the sample, an assurance of the hygienic quality of the sample is issued. In radiological investigations, food chains are used to estimate the ingestion dose in man received from radionuclides in the environment [2]. The transfer of radionuclides in the biosphere and their content at different stages of the environment, in biological species and in foodstuffs, is determined by the physical and chemical characteristics of the radionuclides, their half-lives, the weather conditions and the soil composition.

3. ASSESSMENT OF DOSES

For any discharge of radioactivity into the environment, the most important prediction concerns the resulting doses to people. The monitoring programme should allow the assessment of the effective dose for an individual from the population, which is compared with the dose limit of 1 mSv per year. Dose assessment usually involves the use of radioactive transfer models for the environment and the measurement of environmental contamination. The mathematical models are based on experimental results, the characteristics of the discharged materials and of the environment through which the transport takes place, the exposure pathways including the food chain, and the metabolism of the relevant radionuclides in the individual [3].

The assessment of doses is done for an individual from the critical group. The critical group is a group of people defined by its location and habits; it is the part of the population which will receive the highest dose from a given source of radiation when the specific activities of the radionuclides in the air, water and food for the critical

OTEH2016

RADIOACESIUM137INTHEENVIRONMENTANDTHEEFFECTOFRADIATIONHYGIENECERTIFICATIONONFOOD

group are known. It is necessary to know the time spent under the different conditions of exposure, the amount of inhaled air, and the amounts of consumed food and fluids. Whether and how the results of measurements are used depends on:
- the existence of adequate measurements in the environment that relate to the individuals of the critical group,
- whether the samples are representative,
- the accuracy of the measurements,
- the absence of measurements below the limit of detection for the radionuclides released from the source,
- the validity of the model for the given situation.
In case of an accident or uncontrolled release of radioactive materials, it is necessary to inform the population about the magnitude of the risk due to inhalation, ingestion or exposure to external radiation.

4. ASSESSMENT OF THE EFFECTIVE DOSE OF RADIATION FROM INTAKE OF Cs-137 BY INGESTION

The effective dose of radiation from the intake of Cs-137 by ingestion can be calculated from the measured activity of the radionuclide in food and the annual consumption of the various food products per person. Food consumption depends on age, sex, season, personal preferences and habits; therefore, different data are used for different critical groups. For an average adult member of the population it is assumed that 550 kg of food is ingested per year (Table 1), and this amount of food is taken as the basis for the standardization of the food consumption data [4].

Table 1. Standardized consumption of nutritional support

| Type of food | Item | Annual consumption per habitant in kg | Total in kg | Average in kg |
| Vegetables | Potato | 28.4 | 135.4 | 121 |
| | Beans, peas | 6.8 | | |
| | Other fresh vegetables | 100.2 | | |
| Fruit | Fresh fruits and grapes | 30.3 | 36.9 | 33 |
| | Nuts and almonds | 1.8 | | |
| | Citrus | 4.4 | | |
| | Dried fruit | 0.4 | | |
| Meat | Beef | 11.9 | 88.3 | 78.9 |
| | Pork | 60.5 | | |
| | Sheep meat | 2.7 | | |
| | Livestock meat | 7.9 | | |
| | Fish, offal, horse and game meat | 5.3 | | |
| Cereals | Wheat, rye | 165.5 | 195.8 | 175 |
| | Corn | 28.4 | | |
| | Barley | 0.7 | | |
| | Rice and other cereals | 1.2 | | |
| Dairy products | Cheese | 9.6 | 9.8 | 8.8 |
| | Butter | 0.2 | | |
| | Fresh milk and powder milk | 99.1 | 99.1 | 88.5 |
| Other supplies | Pork fat | 9.2 | 50.2 | 44.8 |
| | Oil | 11.8 | | |
| | Sugar | 29.2 | | |
| TOTAL | | | 615.5 | 550 |

The effective dose (d_i) for each radionuclide is calculated by the formula:

d_i = Σ_{j=1}^{n} m_j · A_{ji} · e(g)_{i,ing}                (1)

where:
m_j is the annual consumption of foodstuff j,
A_{ji} is the average annual specific activity of radionuclide i in foodstuff j,
e(g)_{i,ing} is the committed effective dose per unit intake of radionuclide i by ingestion for the appropriate age group, and
n is the total number of categories of foodstuffs.

Picture 1. The effective radiation dose from the intake of Cs-137 by ingestion
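As a purely illustrative check of Eq. (1), the short sketch below evaluates the annual effective dose from Cs-137 ingestion using the standardized consumption figures of Table 1 and the 2014 specific activities of Table 2; the ingestion dose coefficient e(g) = 1.3e-8 Sv/Bq for adults is an assumed ICRP value, not a figure from this paper, and the "other supplies" category is omitted because no activity is reported for it.

```python
# Illustrative evaluation of Eq. (1) for Cs-137 (adult ingestion).
# Consumption (m_j, kg/year) follows the standardized categories of Table 1;
# specific activities (A_ji, Bq/kg) are the 2014 values of Table 2.
# The dose coefficient e(g) is an assumed ICRP value, not data from this paper.

consumption_kg = {
    "vegetables": 121.0,
    "fruit": 33.0,
    "meat": 78.9,
    "cereals": 175.0,
    "dairy products": 97.3,   # milk (88.5) plus cheese and butter (8.8)
}

activity_bq_per_kg = {
    "vegetables": 0.140,
    "fruit": 0.054,
    "meat": 0.209,
    "cereals": 0.050,
    "dairy products": 0.337,
}

E_G_SV_PER_BQ = 1.3e-8   # e(g)_i,ing for Cs-137, adults (assumed ICRP coefficient)

# d_i = sum over foodstuffs j of m_j * A_ji * e(g)_i,ing
dose_sv = sum(consumption_kg[j] * activity_bq_per_kg[j] * E_G_SV_PER_BQ
              for j in consumption_kg)

print(f"Annual effective dose from Cs-137 ingestion: {dose_sv * 1e6:.2f} uSv")
```

The result is of the order of 1 uSv per year, consistent with the statement below that the ingestion doses are far below the annual limit of 1 mSv.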

5. THE SPECIFIC ACTIVITY MEASUREMENT

The calculated effective doses from the intake of Cs-137 by ingestion, for the average adult population in Serbia in the period from 2010 to 2015, are given in Picture 1. Based on these results it can be concluded that the effective doses of radiation received by the population from this radionuclide through ingestion are significantly below the recommended annual dose limit for an individual, and that in recent years the values lie in a very narrow range, because the measured specific activities of Cs-137 in the same categories of foodstuffs remain at the same level [5].

The specific activity of Cs-137 in the soil is still considerable. For cultivated plants (vegetables, fruits, grains), however, owing to the low soil-to-plant transfer factor of this radionuclide, the activity in crops has been very low over the last 10 years. The average measured specific activities of Cs-137 in the diet in Serbia for the period from 2010 to 2015 are given in Table 2. The number of measured samples and the minimum, maximum and average values with standard deviations for the measurements carried out in 2015 are given in Table 3. Food samples

were collected continuously in the kitchens of the VMA, according to the standard procedure for the control of food products.

6. CONCLUSION
The radiation-hygienic expertise established that the radiation level of the analyzed samples is acceptable with regard to the recommendations of the International Commission on Radiological Protection (ICRP).

The results of the measurements are grouped into categories: vegetables, fruit, cereals, meat and milk. Fish is included in the meat category, since large amounts of fish are not eaten in our country, while dairy products are considered separately because they are strongly represented in our diet.

References
[1] Petrović, B., Mitrović, R.: Radijaciona higijena u biotehnologiji, Beograd, 1991.
[2] Pravilnik o granicama radioaktivne kontaminacije životne sredine i o načinu sprovodjenja dekontaminacije, Sl. list SRJ br. 9, 1999.
[3] Measurement of Radionuclides in Food and the Environment, Technical Reports Series No. 295, IAEA, Vienna.
[4] Pravilnik o utvrđivanju programa sistematskog ispitivanja radioaktivnosti u životnoj sredini, Službeni glasnik RS br. 100/2010.
[5] IAEA Technical Reports Series No. 295, Measurement of Radionuclides in Food and the Environment, Section 5: Collection and Preparation of Samples.
[6] Mitrović, R., Kljajić, R., Petrović, B.: Sistem radijacione kontrole u biotehnologiji - vodeća knjiga. Monografija, 1-386 str., Naučni institut za veterinarstvo "Novi Sad", Novi Sad, 1996.

Table 2. Specific activities of Cs-137 in the diet in Serbia

| Year | Vegetables (Bq/kg) | Fruit (Bq/kg) | Meat (Bq/kg) | Cereals (Bq/kg) | Dairy products (Bq/kg) |
| 2010 | 0.132 ± 0.032 | 0.097 ± 0.019 | 0.203 ± 0.046 | 0.035 ± 0.003 | 0.102 ± 0.024 |
| 2011 | 0.111 ± 0.010 | 0.096 ± 0.017 | 0.576 ± 0.049 | 0.095 ± 0.020 | 0.282 ± 0.124 |
| 2012 | 0.16 ± 0.10 | 0.060 ± 0.020 | 0.201 ± 0.040 | 0.075 ± 0.020 | 0.55 ± 0.30 |
| 2013 | 0.136 ± 0.014 | 0.066 ± 0.007 | 0.278 ± 0.029 | 0.052 ± 0.001 | 0.201 ± 0.023 |
| 2014 | 0.140 ± 0.016 | 0.054 ± 0.005 | 0.209 ± 0.021 | 0.050 ± 0.002 | 0.337 ± 0.081 |

Table 3. Specific activities of Cs-137 in the diet in Serbia in 2015

| Category | Number of samples | Minimum value (Bq/kg) | Average value (Bq/kg) | Maximum measured value (Bq/kg) |
| Vegetables | 20 | < 0.06 | 0.17 ± 0.11 | 0.45 ± 0.02 |
| Fruit | 30 | < 0.03 | 0.30 ± 0.60 | 4.29 ± 0.30 |
| Meat | 22 | < 0.04 | 0.30 ± 0.90 | 4.00 ± 0.20 |
| Cereals | 10 | < 0.01 | 0.04 ± 0.04 | 0.40 ± 0.01 |
| Dairy products | 18 | < 0.03 | 0.192 ± 0.27 | 0.80 ± 0.02 |


SEPARATION OF THE CARBON-DIOXIDE FROM THE GAS MIXTURE


DRAGUTIN NEDELJKOVI
University of Belgrade, Institute of Chemistry, Technology and Metallurgy, Njegoseva 12, 11000 Belgrade, Serbia,
dragutin@tmf.bg.ac.rs
LANA PUTI
University of Belgrade, Institute of Chemistry, Technology and Metallurgy, Njegoseva 12, 11000 Belgrade, Serbia,
lputic@tmf.bg.ac.rs
ALEKSANDAR STAJI
University of Belgrade, Institute of Chemistry, Technology and Metallurgy, Njegoseva 12, 11000 Belgrade, Serbia,
astajcic@tmf.bg.ac.rs
ALEKSANDAR GRUJI
University of Belgrade, Institute of Chemistry, Technology and Metallurgy, Njegoseva 12, 11000 Belgrade, Serbia,
gruja@tmf.bg.ac.rs
JASNA STAJI-TROI
University of Belgrade, Institute of Chemistry, Technology and Metallurgy, Njegoseva 12, 11000 Belgrade, Serbia,
jtrosic@tmf.bg.ac.rs

Abstract: In recent years, strong efforts have been directed towards decreasing the emission of carbon dioxide and other greenhouse gases from the most common combustion processes. One of the possibilities is to construct a membrane that would be highly permeable to carbon dioxide, but not permeable to the other gases commonly present in waste gases (oxygen, nitrogen, hydrogen, methane). As the measurements of the selectivity and permeability for single gases were successfully conducted, in this work the selectivities for mixtures of gases were measured. The membranes were of the dense type, and their permeability was based on the solution-diffusion mechanism. The main focus was on the analysis of the permeation and separation properties of the hydrogen/carbon dioxide mixture. The membranes were synthesized from poly(ether-b-amide) (with 60% of PEG) as a polymer matrix and two different Linde-type zeolites with different pore geometry. Also, a suitable additive was included in order to provide good contact between the hydrophobic polymer chains in the matrix and the electrically charged surface of the zeolite.
Keywords: gas separation, dense membranes, solution-diffusion mechanism, zeolite
suitable for CO2 separation has rapidly increased in the last 25 years, and during that time various polymer materials were examined [6], [7]. In recent years [11], ethylene oxide units in the polymer chains have been proved to enhance the solubility of carbon dioxide and to achieve a high selectivity of carbon dioxide versus other gases. However, pure poly(ethylene oxide) (PEO) has a strong tendency to crystallize, which negatively affects the gas permeability of the membrane [12]. Therefore, co-polymers that contain EO units can be employed for this purpose. The commercially available polymer under the name PEBAX (supplier Arkema, formerly Atotech) has the structure of poly(amide-b-ether) and can be used as a good alternative material for this purpose [13]. By its properties, PEBAX is a thermoplastic elastomer (Fig. 1). As a second choice, the polymer under the commercial name Polyactive (supplied by IsoTis OrthoBiologics) was tested (Fig. 2).

1. INTRODUCTION
In recent decades, global warming has emerged as one of the major threats to the environment, largely driven by carbon dioxide (CO2). Carbon dioxide is emitted into the atmosphere through various combustion processes, such as industrial energy facilities, power plants, transport and construction. As fossil fuels currently have no alternative at the global level, great efforts are being made to reduce the emission of carbon dioxide. Currently, the main conventional methods for its removal are absorption and cryogenic processes [1], [2], [3]. According to the United Nations Framework Convention on Climate Change (UNFCCC, colloquially known as the Kyoto protocol), the emission of greenhouse gases had to be reduced by 8% by the end of 2012 [4]. Carbon dioxide separation based on membranes is suitable at small and medium scales with moderate requirements concerning the purity of the products [5]. The interest in the membrane material

Nuclear and Other Mineral Raw Materials. The zeolites used in this research are presented in Table 1. The average specific surface area of the zeolites was 500 m²/g.

Table 1. Properties of different types of zeolite used for the construction of the membrane
| Type | Frame code | Channel system | Pore size, nm |
| ZSM-5 | MFI | 3d | 0.52 x 0.55 |
| Faujasite | FAU | 3d | 0.74 x 0.74 |
| Linde Type L | LTL | 1d | 0.71 x 0.71 |
| Linde Type A | LTA | 3d | 0.41 x 0.41 |

Picture 1: Structural formula of Pebax co-polymer


PA stands for the polyamide hard block, and is usually nylon-6 or nylon-12, while PE stands for the soft, amorphous polyether block (poly(ethylene oxide) (PEO) or poly(tetramethylene oxide) (PTMO)) [14].

Ethanol,
chloroform,
zeolite,
n-tetradecyl
trimethylammonium
bromide
(NTAB)
and
dimethylaminopyridine (DMAP) were used as received.
The aim of the addition of the additive is to provide good contact between the highly charged zeolite particles and the hydrophobic polymer matrix. It was supposed that a long, normal chain with a charged end would act as a connector between the two phases. For the DMAP, it was also supposed that it would enhance the solubility of carbon dioxide due to its alkaline properties. The Pebax was dissolved in a distilled water/ethanol mixture (70/30 wt.%). The solution (3 wt.% of PEBAX) was stirred for two hours at 80 °C under reflux. In the case of the Polyactive membranes, chloroform was used as the solvent, and the dissolution process was conducted at room temperature. At the same time, the zeolite particles were dispersed in the same solvent as the polymer, and the additive was added (for the samples with the additive).
An ultrasonic mixer with a titanium head was used for the homogenization. Full power was applied for five minutes, in order to avoid detachment of titanium nanoparticles from the head and contamination of the suspension. This dispersion was poured into the polymer solution and stirred overnight. The stirring temperature for the Pebax membranes was 80 °C under reflux, and for the Polyactive membranes room temperature. The long stirring time was needed in order to obtain as homogeneous a solution as possible. The resulting viscous solution was cast onto a Teflon surface, with a Teflon ring used as the border. The purpose of the Teflon was to avoid sticking of the membrane to the surface during the drying process. The solution was covered with a non-woven textile and left overnight to dry at room temperature and ambient pressure. The drying process had to be slow to avoid the formation of bubbles, which negatively affect the permeation properties of the membrane. If the viscosity of the solution is too high, surface tension dominates the casting process and the resulting membrane has an uneven thickness. On the other hand, too low a viscosity increases the sedimentation speed of the particles through the solution, and the result is a membrane with a non-homogeneous dispersion of the zeolite particles through the volume. If the latter is the case, the membrane self-rolls and its application is negatively affected. After drying at room pressure and temperature, the membrane was placed on the high-vacuum line in order to remove any traces of residual solvent.

Picture 2: Structural formula of Polyactive co-polymer


As can be seen from Picture 2, Polyactive consists of polyethylene glycol (PEG) and polybutylene terephthalate (PBT). The PEG:PBT ratio is 77:23 (weight %), with PEG of molecular weight 1500 g/mol.
The chemical, physical and mechanical properties of both
of the polymers can be easily modelled by the simple
variation of the molar ratio of the blocks [16]. Both Pebax
and Polyactive have been shown as promising membrane
materials for acid gas treatment [17], [18]. The high
selectivity of the carbon dioxide versus nitrogen and
hydrogen was reported for the membranes based on those
polymers [17]. The high selectivity was attributed to the strong affinity of the ether and ester bonds towards dissolved carbon dioxide. The high permeability and selectivity of both carbon dioxide and sulphur dioxide versus nitrogen, due to the polarizability of these gases in the presence of PEO segments, was also reported [18]. In order to increase the permeability and selectivity of the membrane, mixed matrix membranes with inorganic particles in a polymer matrix can be constructed. The bulk phase is typically a polymer with a PE block, which enhances the solubility of carbon dioxide by itself, and the dispersed phase consists of inorganic particles [23], [24]. Those particles can be zeolites, carbon molecular sieves or nanoparticles. The aim of the inorganic filler is to improve the selectivity and permeability compared to the purely polymeric membrane, owing to its inherent separation characteristics. Due to the flexibility of the polymer used as the matrix, fragility, the main problem of inorganic membranes, is avoided.
The first attempts to improve the permeability of mixed matrix membranes were reported 30 years ago, when the change in the diffusion time lag of carbon dioxide and methane was observed [25]. The authors observed that the addition of the zeolite increases the time lag, but apparently has no effect on the steady-state permeation [26].

2. EXPERIMENTAL
The Pebax and Polyactive polymers were supplied by the
Arkema and IsoTis OrthoBiologics respectively. The
zeolites were supplied by the Institute of Technology of


The gas permeability measurements were carried out by the time-lag method. The solubility (S), diffusivity (D), permeability (P) and selectivity (α) were determined from the following equations [28], [29]:

P = D·S = [V_p · l · (p_P2 - p_P1)] / [A·R·T·t·(p_f - (p_P1 + p_P2)/2)]      (1)

D = l^2 / (6·θ)      (2)

α_A/B = P_A / P_B = (D_A·S_A) / (D_B·S_B)      (3)

In these equations, V_p stands for the constant permeate volume, l for the thickness of the membrane, A for the area of the membrane, R for the universal gas constant, T for the absolute temperature, t for the time needed for the permeate pressure to increase from the value p_P1 to the value p_P2, p_f for the feed pressure, and θ for the time lag. The solution-diffusion model was used for the analysis of the gas transport properties of the membranes [30]. The selectivity of the membrane for gas A versus gas B was defined as the ratio of their permeabilities.

The first evaluation was done by the naked eye. A properly made membrane should be transparent or pale, smooth to the touch, flat and without visible pinholes or damage. An opaque membrane indicates that the contact between the zeolite and the polymer chains is not good and that light is therefore scattered at the polymer-zeolite particle surface. A rough surface indicates an uneven distribution of the particles through the volume of the membrane. Self-rolling of the membrane indicates sedimentation of the particles, and thus an uneven distribution of the zeolite.

The composition and evaluation of the membranes from the Series I are given in Table 3.

Table 3: The composition and the appearance of the membranes of the Series I
| Membrane number | Zeolite filler | Filler, % |
| I-1 | FAU | 22 |
| I-2 | FAU | 22 |
| I-3 | FAU | 22 |
| I-4 | LTA | 22 |
| I-5 | LTA | 22 |
| I-6 | LTL | 22 |
| I-7 | ZSM | 22 |

The procedure of the permeability measurements was as follows: after 30 minutes on the high-vacuum line, the gas to be measured was applied at one side of the membrane, while the other side was kept under vacuum, so that the driving force for the diffusion was the pressure gradient. The change of pressure with time on the vacuum side of the membrane was monitored, and the permeation properties of the membrane were determined from equations (1)-(3). The gases were measured in the following order: hydrogen, nitrogen, oxygen, carbon dioxide. Between measurements of different gases, the membrane was kept under high vacuum for 15 minutes in order to remove the residual gas. The reason for such an order of the gases was to avoid consecutive measurements of flammable gases.
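A minimal numerical sketch of the time-lag evaluation described above is given below; it assumes a constant-volume, variable-pressure permeation cell, and all input numbers (cell volume, membrane area and thickness, pressures, times) are invented for illustration, chosen only to be of a realistic order of magnitude.

```python
# Sketch of the time-lag evaluation, Eqs. (1)-(3); illustrative numbers only.

R_CM3_CMHG = 6236.4   # gas constant, cm^3*cmHg/(mol*K)

def permeability(Vp, l, A, T, t, pP1, pP2, pf):
    """Eq. (1): P = Vp*l*(pP2 - pP1) / (A*R*T*t*(pf - (pP1 + pP2)/2)).
    With Vp in cm^3, l in cm, A in cm^2, T in K, t in s and pressures in cmHg,
    P is obtained in mol*cm/(cm^2*s*cmHg)."""
    return Vp * l * (pP2 - pP1) / (A * R_CM3_CMHG * T * t * (pf - (pP1 + pP2) / 2))

def diffusivity(l, theta):
    """Eq. (2): D = l^2/(6*theta), with l in cm and the time lag theta in s."""
    return l ** 2 / (6 * theta)

def selectivity(P_A, P_B):
    """Eq. (3): the ideal selectivity as the ratio of permeabilities."""
    return P_A / P_B

# Assumed example: 150 um thick membrane, 11.3 cm^2 area, 35 cm^3 permeate
# volume, 30 C, feed at 76 cmHg, permeate rising from 0.1 to 1.1 cmHg in
# 600 s, time lag 40 s.
l = 150e-4                                     # cm
P = permeability(Vp=35.0, l=l, A=11.3, T=303.15, t=600.0, pP1=0.1, pP2=1.1, pf=76.0)
D = diffusivity(l, theta=40.0)
S = P / D                                      # solubility, from P = D*S
P_barrer = P * 22414 / 1e-10                   # mol -> cm^3(STP), then express in Barrer
print(f"P = {P_barrer:.0f} Barrer, D = {D:.2e} cm^2/s")   # roughly 120 Barrer, 9e-7 cm^2/s
```

For these assumed inputs the permeability comes out close to the values reported later in Table 9, which is the point of the sketch: the raw pressure-rise data reduce to P, D and S through Eqs. (1) and (2) alone.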

From the data presented in Table 3, it is obvious that only the LTL type of zeolite can be used as the basis for the construction of further membranes. The contact between the zeolite particles of the FAU, LTA and ZSM types and the polymer was not good, but the construction of membranes with those fillers and NTAB or DMAP was nevertheless attempted. White spots on the membrane constructed with FAU came as a consequence of the agglomeration of the zeolite particles. A possible reason for this agglomeration is the strong electrostatic forces between the particles of the filler, which overcome the viscosity of the polymer solution during the drying procedure. Areas of uneven colour indicate a non-stationary drying process, which causes rapid local variations in the viscosity of the solution, so that agglomeration was allowed in some areas of the membrane.
3. RESULTS AND DISCUSSION
Six different series of membranes were constructed, three with each polymer. In the first series no additive was added, and the membranes were constructed from polymer and zeolite only. In the second series the additive was NTAB, and in the third DMAP. The composition and the appearance of the membranes are presented in Tables 2, 3 and 4. It should be noted that the weights and ratios are calculated as fractions of the overall weight of the membrane. The compositions of the membrane series are compiled in Table 2.

Table 2: The composition of the membrane series
| Number of series | Polymer | Additive |
| Series I | Pebax | - |
| Series II | Pebax | NTAB |
| Series III | Pebax | DMAP |
| Series IV | Polyactive | - |
| Series V | Polyactive | NTAB |
| Series VI | Polyactive | DMAP |

The membrane II-1 was made solely of polymer and the detergent additive in order to check their compatibility. As a transparent membrane was obtained, it is reasonable to conclude that this detergent is a promising additive for compatibilisation. Analyzing the results for the membranes constructed with this additive and the other zeolites, it is obvious (Table 4) that only the FAU and LTL types of zeolite could be used for the construction of an acceptable membrane. Those results support the results obtained in the experiments on membrane

Table 4: The composition and the appearance of the membranes from the Series II
| Membrane number | Porous filler | Filler, % |
| II-1 | - | - |
| II-2 | FAU | 22 |
| II-3 | FAU | 22 |
| II-4 | FAU | 22 |
| II-5 | LTA | 22 |
| II-6 | LTL | 22.5 |
| II-7 | LTL | 23 |
| II-8 | ZSM | 22 |

construction without the additive. Those results indicate that the ZSM and LTA zeolites cannot be used for this purpose. In order to check the other possible approach to the zeolite-polymer compatibilisation, DMAP was used as an additive. It was supposed that, although it does not contain a long chain or an electrical charge, it could still serve as a compatibilizing additive. It was also supposed that the alkaline properties of the amine would increase the solubility of carbon dioxide in the membrane. The results of the attempted membrane construction are compiled in Table 5.

became visible. Similarly to the Pebax based membranes (Series I), the LTL proved to be a good filler, resulting in smooth, transparent membranes (Samples IV-7 and IV-8). The Samples IV-5 and IV-6 (constructed with FAU) resulted in membranes that contain white spots on the surface. Although this may appear similar to the case of the Pebax based membranes, in this case agglomeration is present, instead of the poor surface contact observed with Pebax. This implies a different nature of the surface behaviour of the same zeolite when dispersed in a different polymer. The application of the ZSM zeolite resulted in a white, non-transparent membrane in both cases (Samples I-7 and IV-9). Therefore, it was decided to include ZSM in the construction of the membranes with NTAB as an additive (Series V). The composition of the membranes and their evaluation is presented in Table 7.

Table 5: The composition and the appearance of the membranes of the Series III
| Membrane number | Porous filler | Filler, % |
| III-1 | LTL | 21.1 |
| III-2 | LTA | 21.1 |
| III-3 | FAU | 21.1 |
| III-4 | ZSM-5 | 21.1 |

Table 7: The composition and the appearance of the membranes from the Series V
| Membrane number | Porous filler | Filler, % |
| V-1 | - | - |
| V-2 | LTA | 22 |
| V-3 | LTA | 22 |
| V-4 | FAU | 22 |
| V-5 | LTL | 22 |
| V-6 | LTL | 22 |
| V-7 | ZSM | 22 |

Analyzing the data presented in Table 5, it is obvious that all of the membranes made with DMAP as an additive appeared white and, therefore, the contact between the surface of the particles and the polymer matrix was bad. Hence, it might be concluded that DMAP cannot be applied as an additive for the purpose of compatibilisation in this system.
The Polyactive based membranes (Series IV-VI) were constructed in a manner similar to the Pebax ones (Series I-III). The notable difference was that chloroform was used as a solvent instead of the water/ethanol mixture. The application of chloroform allows easier removal of the residual solvent from the membrane, owing to the high volatility of chloroform. Alternatively, tetrahydrofuran (THF) can be used for this purpose. The amounts of polymer, zeolites and additives were analogous to those in the Pebax based membranes. The composition and evaluation are presented in Table 6.

Based on the observation of the Sample V-1, it can be seen that the NTAB itself does not agglomerate, and therefore it can be used as an additive. Nevertheless, similarly to the analogous Pebax membranes, only the LTL resulted in a transparent and smooth membrane. The addition of the NTAB to the LTA type of zeolite (Samples V-2 and V-3) results in the formation of white areas on the membrane. This might be a consequence of an excess of the zeolite, so that not all the particles might be covered. However, if the amount of the additive is further increased, precipitation of the zeolite occurs. The reason for the precipitation is similar to the reason for the agglomeration and can be attributed to the electrostatic forces. The presence of the additive does not influence the appearance of the membranes constructed with the FAU and ZSM zeolites (V-4 and V-7, respectively). The sample V-4 shows strong agglomeration of the inorganic filler, while the sample V-7 provides no contact between the zeolite particles and the polymer. Their behaviour is analogous to the behaviour of the samples constructed without the additive (samples IV-5, IV-6 and IV-9).

Table 6: The composition and the appearance of the membranes of the Series IV
| Membrane number | Porous filler | Filler, % |
| IV-1 | - | - |
| IV-2 | LTA | 22 |
| IV-3 | LTA | 22 |
| IV-4 | LTA | 32 |
| IV-5 | FAU | 22 |
| IV-6 | FAU | 22 |
| IV-7 | LTL | 22 |
| IV-8 | LTL | 22 |
| IV-9 | ZSM | 22 |

The membranes based on the Polyactive were also constructed with DMAP as an additive, in order to test the possibility of enhancing the solubility of carbon dioxide. The obtained results are presented in Table 8.

The Sample IV-1 was constructed from pure Polyactive in order to compare the properties of the pure polymer with the literature data. The results obtained in the experiment were slightly different from the specification. This difference might be attributed to a different batch of the polymer, or possibly to measurement error. The samples IV-2 and IV-3 were made with the LTA zeolite, and the first evaluation gave good results. However, when an increase in the concentration of the zeolite was attempted (Sample IV-4), agglomeration occurred and white spots

Table 8: The composition and the appearance of the membranes of the Series VI
| Membrane number | Porous filler | Filler, % |
| VI-1 | LTA | 21 |
| VI-2 | FAU | 21 |
| VI-3 | LTL | 21 |
| VI-4 | ZSM | 21 |


As is obvious from Table 8, no applicable membrane was made with DMAP as an additive. The membranes with LTA and FAU (samples VI-1 and VI-2, respectively) provided no good contact between the zeolite and the polymer. In the case of the LTL (Sample VI-3), the zeolite that had shown the best results in the previous systems, rapid sedimentation occurred, and the membrane was rough on one side and rolled due to the uneven distribution of particles. In the case of the ZSM (Sample VI-4), drying resulted in a powder rather than a membrane. Based on the outcome of the membranes with DMAP as the additive (Series III and VI), it is reasonable to conclude that DMAP cannot be used for the compatibilisation of Pebax and Polyactive membranes.

Polyactive resulted in a membrane with comparable permeability and selectivity, but with a 20-30% reduction in thickness. Concerning the Pebax based membranes, FAU and LTL yielded acceptable results, with permeability and selectivity comparable to those of the Polyactive based membranes.

4. CONCLUSION
In this paper, the possibility of the construction of the
mixed matrix membrane based on the polymer matrix and
surface treated inorganic powder was examined. The
membranes were constructed with two different types of
polymer, four different zeolites, and two different
additives. The optical testing has shown that not all of the
combinations are suitable for the construction of the
membrane. Concerning the membranes based on Pebax,
only LTL and FAU types of zeolites are compatible with
both surface treatment reagent and polymer. In the case of
the ZSM and LTA types of zeolite, good contact between
the particles and polymer chains could not be provided. In
the case of the Polyactive based membranes, only
untreated LTL and LTA could be used. The NTAB has been shown to be a good additive that does not negatively influence the properties of the membranes. On the contrary, DMAP did not yield acceptable results. The permeability of the polymer membranes filled with the zeolite increased by approximately a factor of 2, while the selectivity of the membrane was retained. Therefore, it may be concluded that the mentioned polymer-zeolite combinations with NTAB as an additive are a promising basis for future research. However, reducing the thickness of the membrane will be the main challenge for future work. As the main application of these membranes is planned to be in wet conditions, the next step in this research would be to measure the selectivity and permeability of the zeolite-filled membranes in wet conditions.

Prior to the measurement of the membranes, the permeabilities of samples II-1 and V-1 were measured. The reason was to determine whether the addition of the NTAB affects the permeability of the pure polymer. The outcome of the measurements clearly indicates that there is no change in the permeability of the pure polymer when the NTAB is dispersed in it.
For the measurements of the gas permeability and the
selectivity, only smooth and transparent membranes were
chosen. This includes membranes with LTL and FAU
(Pebax) and membrane with LTL (Polyactive). The
Polyactive based membrane with LTA and LTL without
NTAB were measured as well.
The results of permeability and selectivity measurements
are compiled in Table 9.
Table 9: The results of the permeability measurements of the membranes
| Memb. no. | Thickness, μm | P (CO2), Barrer | α (CO2/H2) | α (CO2/O2) | α (CO2/N2) |
| II-3 | 252 | 88 | 9.1 | 23 | 48 |
| II-6 | 174 | 128 | 9.7 | 22 | 55 |
| II-7 | 150 | 131 | 9.4 | 20 | 52 |
| IV-2 | 217 | 142 | 11.8 | 23.5 | 62.4 |
| IV-3 | 191 | 139 | 10.9 | 21.7 | 60.4 |
| IV-7 | 121 | 130 | 9.2 | 20.9 | 54.7 |
| IV-8 | 162 | 135 | 9.5 | 21.1 | 54.8 |
| V-5 | 232 | 142 | 11.6 | 22.0 | 61.1 |
| V-6 | 199 | 137 | 11.3 | 21.7 | 60.5 |

ACKNOWLEDGEMENT
The authors would like to acknowledge the financial support of the Ministry of Science of Serbia, through the research projects TR 34011 and III 45019.

REFERENCES

It should be noted that the usual unit for the gas permeability of a membrane in the membrane research community is the Barrer. One Barrer corresponds to a flux of 10^-10 cm3 of gas at standard temperature and pressure, per second, through 1 cm2 of membrane area and 1 cm of thickness, driven by a pressure difference of 1 cmHg, i.e. 1 Barrer = 10^-10 cm3(STP)·cm/(cm2·s·cmHg).
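For readers who prefer SI units, the following small sketch converts a permeability in Barrer to mol·m/(m2·s·Pa); the conversion constants (22,414 cm3(STP) per mole, 1 cmHg = 1333.22 Pa) are standard physical values and not results of this work.

```python
# Conversion of a permeability from Barrer to the SI unit mol*m/(m^2*s*Pa).

CM3_STP_PER_MOL = 22414.0   # molar volume of an ideal gas at STP, cm^3/mol
PA_PER_CMHG = 1333.22       # Pa per cmHg

def barrer_to_si(p_barrer):
    """1 Barrer = 1e-10 cm^3(STP)*cm/(cm^2*s*cmHg)."""
    p_cgs = p_barrer * 1e-10                     # cm^3(STP)*cm/(cm^2*s*cmHg)
    # cm^3(STP) -> mol, cm -> m (1e-2), cm^2 -> m^2 (1e-4), cmHg -> Pa
    return p_cgs / CM3_STP_PER_MOL * 1e-2 / (1e-4 * PA_PER_CMHG)

# Example: the CO2 permeability of membrane IV-2 in Table 9
print(f"{barrer_to_si(142):.2e} mol*m/(m^2*s*Pa)")   # about 4.8e-14
```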

[1] A.B. Rao and E.S. Rubin, A technical, economic, and


environmental assessment of amine-based CO2
capture technology for power plant greenhouse gas
control. Environ. Sci. Technol., 36 (2002), pp. 4467
4475
[2] U. Desideri and R. Corbelli, CO2 capture in small
size cogeneration plants: technical and economical
considerations. Energy Convers. Manage., 39 (1998),
pp. 857867
[3] A. Meisen and S. Xiaoshan, Research and
development issues in CO2 capture. Energy Convers.
Manage., 38 (1997), pp. 3742
[4] The United Nations Framework Convention on
Climate Change, Kyoto, 1997
[5] W.J. Koros and G.K. Fleming, Membrane-based gas

Analyzing the permeability data presented in Table 9, it is obvious that all of the membranes that appeared transparent showed good and comparable permeability and diffusivity properties. The best results regarding permeability and selectivity were obtained by the dispersion of the LTA type of zeolite in the Polyactive. The results were good for both the surface-treated and the non-treated zeolite. The dispersion of the LTL in the

separation. J. Membr. Sci., 83 (1993), pp. 180


[6] S.P. Nunes, K.-V. Peinemann, Editors , Membrane
Technology in the Chemical Industry, (second
revised and extended edition), Wiley-VCH Verlag
GmbH, Germany (2006), pp. 53150
[7] K. Ghosal and B.D. Freeman, Gas separation using
polymer membranes: an overview. Polym. Adv.
Technol., 5 (1994), pp. 673697
[8] H. Lin and B.D. Freeman, Materials selection
guidelines for membranes that remove CO2 from gas
mixtures. J. Mol. Struct., 739 (2005), pp. 5774
[9] H. Lin and B.D. Freeman, Gas solubility diffusivity
permeability in poly(ethylene oxide). J. Membr. Sci.,
239 (2004), pp. 105117
[10] L.A. Utracki, History of commercial polymer alloys and blends, Polym. Eng. Sci., 35 (1995), pp. 217
[11] G. Deleens, in: N.R. Legge, G. Holder, H.E. Schroeder, Editors, Thermoplastic Elastomers: A Comprehensive Review, Hanser Publishers, New York (1987), pp. 215-230
York (1987), pp. 215230
[12] M. Yoshino, K. Ito, H. Kita and K.-I. Okamoto,
Effect of hard-segment polymers on CO2/N2 gas
separation properties of poly(ethylene oxide)-

segmented copolymers. J. Polym. Sci. Part B Polym.


Phys., 38 (2000), pp. 17071715
[13] V. Bondar, B.D. Freeman, I. Pinnau, Gas sorption and characterization of poly(ether-b-amide) segmented block copolymers. J. Polym. Sci. Part B Polym. Phys., 37 (1999), pp. 2463-2475
[14] A. Car, C. Stropnik, W. Yave, K.-V. Peinemann,
PEG modified poly(amide-b-ethylene oxide)
membranes for CO2 separation, Journal of
Membrane Science, Volume 307, Issue 1, (2008),
Pages 88-95
[15] A. Car, C. Stropnik, W. Yave, K.-V. Peinemann,
Tailor-made Polymeric Membranes based on
Segmented Block Copolymers for CO2 Separation,
Advanced Functional Materials, Volume 18, Issue
18, (2008), pp. 28152823
[16] D.R. Paul and D.R. Kemp, The diffusion time lag in
polymer membranes containing adsorptive fillers. J
Polym Sci: Polym Phys, 41 (1973), pp. 7993
[17] Kulprathipanja S, Neuzil RW, Li NN. Separation of
fluids by means of mixed matrix membranes. US
patent 4740219, 1988


ELECTRODEPOSITION OF METAL COATINGS FROM EUTECTIC


TYPE IONIC LIQUID
MIHAEL BUKO
University of Defence, Military Academy, Veljka Lukia Kurjaka 33, Belgrade, mbucko@tmf.bg.ac.rs
JELENA B. BAJAT
Faculty of Technology and Metallurgy, Karnegijeva 4, Belgrade, jela@tmf.bg.ac.rs

Abstract: The aim of this paper is to investigate the process of metal coating electrodeposition, from an unconventional
electrolyte, based on eutectic type ionic liquid. Eutectic mixture has been prepared by mixing two simple and affordable
organic compounds, choline chloride and urea. The main advantage of this ionic liquid, as compared to conventional
aqueous electrolytes, is higher current efficiency for metal electrodeposition due to inhibited hydrogen evolution reaction.
The described ionic liquid has been used in our work, for the electrodeposition of Zn-Mn alloy coating on steel substrate.
The current efficiency for the alloy electrodeposition has been calculated by measuring the weight gain of the samples. The
reduction process of Zn2+ and Mn2+ ions has been followed by cyclic voltammetry. The influence of deposition parameters
on the morphology and chemical composition of the coatings has been determined by scanning electron microscopy.
Keywords: electrodeposition, coating, eutectic mixture, ionic liquid, choline chloride.

1. INTRODUCTION
Deep eutectic solvents (DES) represent a new class of ionic liquids and refer to mixtures of two solid compounds whose melting temperatures are above room temperature but which, when mixed together, form a liquid with a melting point lower than room temperature, as illustrated in Figs. 1a and 1b [1]. DES are formed from eutectic mixtures of a quaternary ammonium salt with a hydrogen bond donor species such as a carboxylic acid, a glycol or an amine (an example is given in Fig. 1c), and are known for their ability to readily dissolve metal salts and oxides as well as organic materials such as cellulose [2].
Choline chloride-based DES have been successfully
assessed for electrodeposition of different metals (Cr, Mn,
Cu, Ag, Fe, Zn) and alloys (Zn-Cr, Zn-Sn, Zn-Ni, Zn-Mn,
Ni-Co) on different substrates, producing films with
characteristics that are completely different from those
obtained from aqueous electrolytes [3].
The Zn-Mn alloys electrodeposited from ionic liquids are expected to be greatly superior in corrosion resistance to those obtained from aqueous solutions; this is due to the numerous benefits afforded by ionic liquids, such as the avoidance of water chemistry defects and higher electrodeposition current efficiencies [4]. However, the literature on the electrodeposition of Zn-Mn alloys from DES is very scarce, and the mechanism of the reduction processes occurring during the alloy deposition has not been investigated in detail [5, 6].


2.3. Coating characterization
The Zn-Mn coatings were deposited galvanostatically, i.e. at constant current densities in the range between 2 and 10 mA cm-2. The surface morphology and composition of the Zn-Mn samples obtained from the DES were analyzed by a JEOL JSM 5800 scanning electron microscope (SEM), equipped with an Oxford Instruments energy dispersive X-ray spectrometer (EDS).
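The abstract notes that the current efficiency of the alloy deposition was calculated from the weight gain of the samples. A minimal sketch of such a Faraday-law estimate is given below; the electrode area matches the experimental description, but the mass gain, deposition time and Mn fraction are invented for illustration and are not measured values from this paper.

```python
# Sketch of a Faraday-law current efficiency estimate for Zn-Mn deposition.
# All numerical inputs below are illustrative assumptions, not data from this paper.

F = 96485.0                  # Faraday constant, C/mol
M_ZN, M_MN = 65.38, 54.94    # molar masses, g/mol
N_ELECTRONS = 2              # both Zn2+ and Mn2+ are reduced by two electrons

def current_efficiency(mass_gain_g, current_a, time_s, x_mn):
    """Fraction of the passed charge actually used for alloy deposition.
    x_mn is the atomic fraction of Mn in the deposit (e.g. from EDS)."""
    m_alloy = x_mn * M_MN + (1.0 - x_mn) * M_ZN             # average molar mass, g/mol
    charge_used = mass_gain_g / m_alloy * N_ELECTRONS * F   # C incorporated in the deposit
    charge_passed = current_a * time_s                      # C supplied by the galvanostat
    return charge_used / charge_passed

# Assumed example: 0.25 cm^2 electrode at 3 mA cm^-2 for 1 h, 0.8 mg weight gain,
# 28 at.% Mn in the deposit.
eff = current_efficiency(mass_gain_g=0.8e-3, current_a=3e-3 * 0.25, time_s=3600.0, x_mn=0.28)
print(f"Estimated current efficiency: {eff:.0%}")
```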

3. RESULTS AND DISCUSSION


3.1. Cyclic voltammetry research
Figure 1. (a) solid compounds used to prepare eutectic
liquid; (b) eutectic behavior of choline chloride-urea
mixture; (c) choline chloride and urea structure

The CV technique was used to investigate the working potentials for Zn and Mn metal ion reduction from the DES, and also to get an insight into the cathodic processes in the supporting electrolyte alone. To study the reduction processes of the various ions on the steel cathode, the cyclic voltammograms were recorded in the potential range between -1900 and -600 mV, at a scan rate of 100 mV s-1.

Therefore, in this paper we analyze the electrochemical


processes on steel cathode during the electrodeposition of
Zn-Mn alloy. The results were obtained by using cyclic
voltammetry. In addition, the influence of the deposition
parameters on the appearance and composition of the
coatings was investigated by scanning electron microscopy.

At the beginning, the voltammetric response of the neat DES was recorded under different hydrodynamic conditions, provided by different rotation rates of the magnetic stirrer, as shown in Fig. 2. On scanning from the open circuit potential of the steel electrode towards negative potentials, it can be noted that, prior to the sharp current increase related to the bulk electrolyte decomposition at around -1370 mV, there are two reduction peaks. The first peak (c1) starts at about -660 mV, and the second (c2) at about -990 mV. The processes responsible for these waves were studied in detail in [7], where it was concluded that the peak c1 is related to the reductive desorption of Cl- ions, while the intensity of the peak c2 is enhanced upon water addition to the ChCl-urea DES, proving that c2 represents hydrogen evolution from water impurity molecules or from another hydrogen bond donor, i.e. urea [7]. Although the prepared DES solutions in our work were dried and stored in a desiccator before use, some moisture was still absorbed into the ionic liquid electrolyte during manipulation and experiments, so the appearance of the peak c2 in this work is understandable.

2. EXPERIMENTAL PART
2.1. Ionic liquid preparation
Deep eutectic solvents were obtained by combining urea with choline chloride (ChCl) in a 2:1 molar ratio and heating to a temperature of 70 °C, with continuous magnetic stirring, until a colourless liquid was formed. Both ZnCl2 and MnCl2·H2O, at concentrations of 0.1 mol dm-3, were then added to the mixture. The prepared DES solutions were dried in a vacuum chamber at 80 °C for 3 h, at p < 10 mbar, and then stored in a desiccator. All electrochemical experiments were performed in an open atmosphere.

2.2. Electrochemical analysis


Electrochemical measurements, i.e. cyclic voltammetry
(CV) and galvanostatic deposition were carried out using a
ZRA Reference 600 Potentiostat/Galvanostat, from Gamry
Instruments. A laboratory sand bath was used to maintain the temperature of the bulk electrolyte at 70 °C for all electrochemical measurements in the DES.
A three-electrode electrochemical cell was employed, with a steel working electrode set up as a 0.25 cm2 plate. The electrode was mechanically prepared using abrasive emery papers down to 2000 grit, degreased in a saturated solution of NaOH in ethanol, pickled with 2 mol dm-3 HCl for 30 s,
and finally rinsed with distilled water, acetone and dried in
air by a fan. The counter electrode was a high purity Zn
panel. The reference electrode was a saturated calomel
electrode (SCE), connected to the cell through a Luggin
capillary tip. Although conventionally used in aqueous
electrolytes, this reference electrode has also been
successfully used in deep eutectic solvent, without any
special treatment. After deposition the electrodeposited
coatings were thoroughly cleaned with acetone.

Figure 2. Cyclic voltammograms of the ChCl-urea basic electrolyte on the steel cathode for two rotation rates of the magnetic stirrer. Scan rate 100 mV s-1


In order to analyze the electrochemical behavior of the Mn2+ species, CV was conducted in the ChCl-urea-MnCl2 solution at three rotation speeds of the magnetic stirrer (Fig. 3). Apart from the broad peaks already identified in the blank DES, there is no other cathodic current peak which could be related to the Mn2+/Mn reduction, nor the corresponding anodic stripping peak. However, a current increase related to Mn deposition was observed in the voltammetric analysis of Mn2+ reduction in DES in the literature, on Au [8] and Cu electrodes [4]. Thus, our work implies that the Mn2+ species is not reduced in the ChCl-urea DES on a steel substrate at potentials more positive than the potential of the electrochemical degradation of the bulk ChCl-urea electrolyte.


sulfuric acid, although the mechanism by which metal ions with a reversible potential more electronegative than that of the hydrogen evolution process may actually influence this process is still not completely understood [11]. Some of the suggested mechanisms are specific adsorption, underpotential deposition, or an effect on the potential at the outer Helmholtz plane of the double layer [10, 11].
Further inspection of the voltammograms reveals that, although the reduction of bulk Mn on the steel substrate was not observed in the cyclic voltammogram of the ChCl-urea DES containing MnCl2 (Fig. 3), when the Zn2+ ion is present in the DES in addition to Mn2+, a clear reduction peak appears at -1770 mV (Fig. 4), which is due to the reduction of the Mn2+ ion. In other words, the Mn co-deposition occurs at a more positive potential than that for the bulk deposition of Mn. Based on literature data on electroreduction in the ChCl-urea deep eutectic solvent, it may be suggested that the peak at -1770 mV results from the reduction of Mn on the Zn nuclei that were freshly deposited at more positive potentials (peak C3). Similarly, it was shown by others that the peak related to Zn2+ reduction on a Cu substrate was shifted to a more positive potential if Ni2+ ions were previously reduced on the Cu [3], or that the ChCl-urea DES decomposition occurred at a more positive potential when fresh Pt nuclei were formed on vitreous carbon during the same voltammetric scan [7].

The comparison of the voltammogram shapes at the three rotation speeds in Fig. 3, as well as at the two speeds in Fig. 2, shows that the processes of chloride desorption and water impurity reduction are intensified by solution stirring when the stirring rate is increased from 0 to 300 rpm, but with a further increase of the rotation rate to 500 rpm there is a negligible increase in both peak current densities. It is well known that, when the voltammogram peak height increases with the rotation rate at low transport rates but becomes independent of the rotation rate at higher transport rates, this is good diagnostic proof that, besides the mass transfer limitation of the process, there are also limitations due to adsorption/desorption and surface reaction [9].

Figure 4. Cyclic voltammogram of the ChCl-urea + 0.1 mol dm-3 MnCl2 + 0.1 mol dm-3 ZnCl2 electrolyte on the steel electrode. Scan rate 100 mV s-1

Figure 3. Cyclic voltammograms of the ChCl-urea + 0.1 mol dm-3 MnCl2 electrolyte on the steel electrode for three rotation rates of the magnetic stirrer. Scan rate 100 mV s-1
The cyclic voltammogram of 0.1 mol dm-3 ZnCl2 + 0.1 mol dm-3 MnCl2 in the DES recorded on steel is presented in Fig. 4. The four current peaks that are clearly seen may be ascribed, from positive towards negative potentials, to the processes in the blank electrolyte, Zn2+ reduction and, finally, Mn2+ reduction.

3.2. Zn-Mn coating characterization
In order to determine the influence of the deposition conditions on the chemical composition and the surface morphology of the Zn-Mn alloys, the coatings were galvanostatically deposited at c.d.s in the range from 2 to 10 mA cm-2. The variation of the Mn content in the alloy as a function of the deposition c.d., determined by EDS analysis, is shown in Table 1. The sample deposited at 2 mA cm-2 is Mn free, denoting that the working potential of the electrode did not reach the value for Mn2+ reduction. By increasing the c.d. to 3 mA cm-2, the Mn content sharply increases to 28 at.%, and then, between 3 and 8 mA cm-2, it oscillates in the narrow range of 25 to 31 at.%. A further increase in the deposition c.d. to 10 mA cm-2 causes a sharp decay in the manganese percentage in the alloy.

When the CVs of the metal-free DES (Fig. 2) are compared to the voltammograms of the DES with metal ions (Figs. 3 and 4), it is clear that the bulk electrolyte decomposition, indicated by the sharp current increase, is shifted to more negative potentials when either of the two metal ions is present in the solution. A similar phenomenon was reported, for example, in a water solution of sulfuric acid [10], when it was shown that the addition of Zn2+, Mn2+ or Cd2+ to the sulfuric acid inhibited the hydrogen evolution reaction on steel. This behavior was later used in order to inhibit iron corrosion in

It may be observed that, contrary to the continuous growth in Mn percentage typically seen in water solutions, there is a sharp increase in Mn content at 3 mA cm-2 in the DES, which is quite a low deposition c.d., and after that the Zn/Mn ratio is almost constant and the Mn percentage even decreases at higher c.d.s, as Table 1 shows. The most reasonable suggestion, in our view, to explain such a Zn/Mn ratio in the alloy obtained from the DES would be that, unlike in a water electrolyte, where after reaching a certain c.d. the Zn deposition is under diffusion control, this is not the case in the DES. In other words, in the range of c.d.s under consideration (3 to 10 mA cm-2), for both metal species the limiting step in their reduction may be accelerated by shifting the deposition potential to more negative values, which would not be the case for a diffusion-limited step.


Figure 5. SEM micrographs of Zn-Mn coatings deposited on the steel substrate from the ChCl-urea + 0.1 mol dm-3 MnCl2 + 0.1 mol dm-3 ZnCl2 electrolyte at (a) 2 mA cm-2, (b) 5 mA cm-2, and (c) 10 mA cm-2

Table 1. Variation of the Mn content in Zn-Mn coatings with electrodeposition current density, at 70 °C
| Current density, mA cm-2 | Mn content, at.% |
| 2 | 0 |
| 3 | 28 |
| 4 | 25 |
| 5 | 31 |
| 8 | 29 |
| 10 | 10 |

Several selected samples, whose compositions were determined by EDS and are shown in Table 1, were studied by SEM. The surface morphology of three samples is shown in Fig. 5. The SEM micrographs indicate that the coatings deposited at low c.d. (2 and 5 mA cm-2) consist of closely packed platelets, a morphology typical of Zn coatings deposited at low c.d.s. However, an increase in c.d. to 10 mA cm-2 resulted in the formation of a porous surface with the germs of dendrites, probably as a consequence of the more intensive decomposition of the base electrolyte and the evolution of hydrogen gas. Therefore, it may be inferred that the optimal deposition c.d. would be 3 mA cm-2, since at higher c.d. there is a negligible increase in Mn content, while the surface morphology and appearance become less acceptable for protective coatings.

From the practical point of view, one may conclude that the change in deposition c.d. is not a parameter that can significantly vary the chemical composition of Zn-Mn electrodeposits from the ChCl-urea DES, as is usually the case in water electrolytes [12]. However, a positive result is certainly the fact that in the DES, as shown in Table 1, a quite high Mn content (25 to 30%) may be achieved at low deposition c.d., which is usually not feasible in water solutions [12].

4. CONCLUSION
Cyclic voltammograms recorded in the choline chloride-urea deep eutectic solvent, with or without Zn2+ and Mn2+ ions, show that there is no observable Mn reduction peak when only Mn2+ is present in the DES solution. A clear Mn peak develops upon the addition of Zn2+ to the solution, probably due to the previous nucleation of Zn on the steel substrate. The EDS analysis of the Zn-Mn deposits obtained with current densities of 2 to 10 mA cm-2 reveals that the Mn content is in the range of 25 to 30 at.% and does not increase significantly with the deposition current density. An explanation for this interesting observation has been suggested. At 3 mA cm-2, a Zn-Mn coating of dense and homogeneous appearance may be formed, having 28 at.% Mn, which is considerably higher than the typical Mn content of Zn-Mn coatings deposited from aqueous electrolytes.

References
[1] Abbott, A., Capper, G., Davies, D., Rasheed, R., Ionic
liquid analogues formed from hydrated metal salts,
Chemistry - A European Journal, 10 (2004) 3769
3774.
[2] Abbott, A., Capper, G., Mckenzie, K., Ryder, K.,
Electrodeposition of zinc-tin alloys from deep eutectic
solvents based on choline chloride, Journal of

Electroanalytical Chemistry, 599 (2007) 288-294.
[3] Yang, H., Guo, X., Chen, X., Wang, S., Wu, G., Ding, W., Birbilis, N., On the electrodeposition of nickel-zinc alloys from a eutectic-based ionic liquid, Electrochimica Acta, 63 (2012) 131-138.
[4] Fashu, S., Gu, C., Zhang, J., Zheng, H., Wang, X., Tu, J., Electrodeposition, morphology, composition, and corrosion performance of Zn-Mn coatings from a deep eutectic solvent, Journal of Materials Engineering and Performance, 24 (2015) 434-444.
[5] Bozzini, B., Gianoncelli, A., Kaulich, B., Mele, C., Prasciolu, M., Kiskinova, M., Electrodeposition of manganese oxide from eutectic urea/choline chloride ionic liquid: An in situ study based on soft X-ray spectromicroscopy and visible reflectivity, Journal of Power Sources, 211 (2012) 71-76.
[6] Chung, P., Cantwell, P., Wilcox, G., Critchlow, G., Electrodeposition of zinc-manganese alloy coatings from ionic liquid electrolytes, Transactions of the Institute of Metal Finishing, 86 (2008) 211-219.
[7] Gomez, E., Valles, E., Platinum electrodeposition in an ionic liquid analogue. Solvent stability monitoring, International Journal of Electrochemical Science, 8 (2013) 1443-1458.
[8] Guo, J., Guo, X., Wang, S., Zhang, Z., Dong, J., Peng,
L., Ding, W., Effects of glycine and current density on
the mechanism of electrodeposition, composition and
properties of Ni-Mn films prepared in ionic liquid,
Applied Surface Science, 365 (2016) 3137.
[9] Vargas, R., Borrs, C., Mostany, J., Scharifker, B.,
Kinetics of surface reactions on rotating disk
electrodes, Electrochimica Acta, 80 (2012) 326333.
[10] Drai, D., Vorkapi, L., Inhibitory effects of
manganeous, cadmium and zinc ions on hydrogen
evolution reaction and corrosion of iron in sulphuric
acid solutions, Corrosion Science, 18 (1978) 907910.
[11] Sathiyanarayanan, S., Jeyaprabha, C., Muralidharan,
S., Venkatachari, G., Inhibition of iron corrosion in 0.5
M sulphuric acid by metal cations, Applied Surface
Science, 252 (2006) 81078112.
[12] Popov, K., Djoki, S., Grgur, B., Fundamental aspects
of electrometallurgy, Kluwer Academic Publishers,
New York, 2002.


IMPACT OF THE ALTERED TEXTURE OF THE ACTIVE FILLING OF THE FILTER ON THE SORPTIVE CHARACTERISTICS WITH SPECIAL REFERENCE TO THE EFFICIENCY OF FILTERING
MARINA ILI
Military Technical Institute, Belgrade, marinailic1970@gmail.com
ELJKO SENI
Military Technical Institute, Belgrade, zsenic1@gmail.com
VLADIMIR PETROVI
TRAYAL Corporation, Kruevac, vladimirpetrovic66@yahoo.com
BILJANA MIHAJLOVI
TRAYAL Corporation, Kruevac, aktivni.ugljevi@gmail.com
VUKICA GRKOVI
TRAYAL Corporation, Kruevac, vukica.g30@gmail.com

Abstract: In accordance with the defined requirements of the SRPS 8748 and SRPS EN 14387 standards and the existing equipment, this work shows the results of comparative tests of the sorptive properties of the Filter M3 and the Filter M4, with the standard recipe and the altered recipe for the impregnation of activated carbon, upon phosgene, chloropicrin, ammonia, sulphur dioxide and chlorine challenge at different concentrations, relative humidities and flows of the inlet gas mixture, for the standard quantity of the active filling of the Filter M3.
Keywords: active filling, impregnation of activated carbon, filters, sorptive properties of the filters and the efficiency of filtering.
the industrial plants, impose new requirements for the
protection of people, i.e. the expansion of the existing
requirements for the protection provided by the Filter.

1. INTRODUCTION
The Filter or the Combined filter is a filtering device for
respiratory protection which, together with the protective
mask, makes up a complex device for the protection of
eyes, face and respiratory organs against CBRN
contamination in the form of gases, vapours, solid and
liquid particles of aerosols.

Many toxic chemicals used for industrial purposes, owing to their availability and low cost on the market, can nowadays be used as chemical agents for paralysing a system or for posing a general threat to people, with short-term or long-term effects.

The design of the Filter has been made in cooperation between the Military Technical Institute, Belgrade, and TRAYAL Corporation, Kruševac, with the aim of fulfilling the most up-to-date requirements in the domain of CBRN protection. Owing to the design of the inner filling, the Filter can be used for protection against the following contaminants:
- dust, aerosols and smoke (anti-aerosol Filter),
- vapours and gases (anti-vapour Filter),

Consequently, besides providing the CBRN protection


against clearly defined poison gases, the Filter also serves
to protect from toxic chemicals and industrial gases,
especially from ammonia, sulphur dioxide and chlorine.

in the way that the removal of the harmful substances is performed, by means of filtration or sorption. These are two different phenomena, differing both in the substances being removed from the stream of contaminated air and in the methods and media of removal.

On the other hand, the requirements of environmental protection bring restrictions regarding the chemicals used for the impregnation of the activated carbon in the Filter. For this reason, the salts of hexavalent chromium are no longer used in many countries of Europe and worldwide, due to their toxicity for humans and for the environment, regardless of whether or not they

This fact entails the necessity to clearly define the gases from the aforementioned group of compounds against which the Filter protects, and also to define the protection time of the Filter under various conditions, for the standard quantity of the active filling.

The latest world trends, the more frequent terrorist attacks and other potential forms of sabotage, as well as accidents at

are signatories of the Regulation on the Registration,


Evaluation, Authorisation and Restriction of the
Chemicals (REACH).

chloropicrin challenge, at different concentrations, relative humidities and flows of the inlet gas mixture.
2.1.2. The sequence of activities of the methods of testing of the time of sorption properties of the Filter M3 and the Filter M4 upon phosgene challenge

For this reason, this work emphasizes the change of the texture of the anti-vapour filling, i.e. of the fixed layer of impregnated activated carbon, for the Filter M3 and the Filter M4, for the purpose of testing the sorptive properties of the Filter against:
- current contaminants from the NBC group of contaminants,
- toxic chemicals that can be used as contaminants in the aforementioned situations.

The flow of the mixture of air and phosgene is to be set


until the preset concentration has been reached. The time
is to be measured from the moment of introduction of the
mixture of air and phosgene into the filter till the change
in colour of the indicator paper.
The measured time indicates the time of protection of the
filter against phosgene.

The activated carbon has been produced from carbonized coconut shell of precisely defined quality, by activation with water vapour in a static furnace under precisely prescribed process conditions. It is characterized by a high starting value of activity, expressed by an iodine number ranging from 1300 to 1440 mg/g, and by high mechanical hardness, which are the two basic prerequisites for the quality of impregnated activated carbons. The produced activated carbon is brought to the prescribed granulometric composition by graining and sifting.

2.1.3. The sequence of activities of the methods of testing of the time of sorption properties of the Filter M3 and the Filter M4 upon chloropicrin challenge
The flow of the mixture of air and chloropicrin is to be set
until the preset concentration has been reached. The time
is to be measured from the moment of introduction of the
mixture of air and chloropicrin into the filter till the
change in colour of the indicator paper.
The measured time indicates the time of protection of the
filter against chloropicrin.

The process of impregnation of the activated carbon is conducted in accordance with precisely defined parameters and conditions of application of the impregnation compounds, and it is strictly monitored in each stage of the production, according to the prescribed monitoring procedures. After the thermal processing, the impregnated carbon is sifted to be dust-free and brought to quality in accordance with the prescribed technical characteristics.

2.2. Methods of testing of the time of sorption properties of the Filter M3 and the Filter M4 upon ammonia, sulphur dioxide and chlorine gas challenge, according to SRPS EN 14387/13

2.2.1. Principle
The principle of the method consists of the monitoring and comparative analysis of the time of sorption properties of the Filter M3 and of the Filter M4, with 27 mm height of active fill, upon challenge by three test gases from the group of toxic chemical compounds (ammonia, sulphur dioxide and chlorine gas), at different concentrations, relative humidity and flow of the inlet gas mixture.

Two batches of the impregnated activated carbon have been produced: the batch of activated carbon impregnated with the salts of copper, chromium and silver, according to the standard recipe for the Filter M3, and the batch of activated carbon impregnated with the salts of Cu2+, Ag+ and Zn2+ instead of the Cr6+ salt, according to the altered recipe for the Filter M4. All the Filters used for the purpose of testing the sorptive properties, filled with batches of the impregnated activated carbon of both standard and altered impregnation, have been produced according to the standard technology for the making of the Filter M3.

2.2.2. The sequence of activities of the methods of testing of the time of sorption properties of the Filter M3 and the Filter M4 upon ammonia challenge
The flow of the mixture of air and ammonia is to be set until the preset concentration has been reached. The time is to be measured from the moment of introduction of the mixture of air and ammonia into the filter till the change in colour of the indicator paper.



The time that has been measured indicates the time of protection of the filter against ammonia.
2.2.3. The sequence of activities of the methods of testing of the time of sorption properties of the Filter M3 and the Filter M4 upon sulphur dioxide challenge


The flow of the mixture of air and sulphur dioxide is to be set until the preset concentration has been reached. The time is to be measured from the moment of introduction of the mixture of air and sulphur dioxide into the filter till the change in colour of the indicator solution.


The time that has been measured indicates the time of protection of the filter against sulphur dioxide.


2.2.4. The sequence of activities of the methods of testing of the time of sorption properties of the Filter M3 and the Filter M4 upon chlorine gas challenge



The flow of the mixture of air and chlorine gas is to be set until the preset concentration has been reached. The time is to be measured from the moment of introduction of the mixture of air and chlorine gas into the filter till the change in colour of the indicator paper.


The measured time indicates the time of protection of the filter against chlorine gas.

For the purpose of quality control of the active fill for the Filters M3 and M4, two different produced batches of carbon were sampled for testing. The first batch is carbon impregnated according to the standard recipe for the activated carbon used in the Filter M3, with Cu2+, Cr6+ and Ag+ salts. The second one is impregnated with a Zn2+ salt, which serves as a substitute for the Cr6+ salt; the standard recipe was therefore changed for the Filter M4.


Table 1. Physical-mechanical properties of the impregnated activated carbons

Activated carbon type | Standard impregnation | Changed impregnation
Moisture content [%] | 1.0 | 2.4
Apparent density [g/l] | 536 | 552
Mechanical strength [%] | 71 | 74
Grain size distribution:
  > 1.6 mm | 0.10 % | 0.00 %
  > 1.4 mm | 2.61 % | 0.46 %
  0.7-1.4 mm | 95.01 % | 99.29 %
  0.5-0.7 mm | 1.89 % | 0.13 %
  < 0.5 mm | 0.18 % | 0.01 %

3.1. The sorption properties of Filters M3 and M4 upon phosgene and chloropicrin challenge

All the Filters M3 and M4 that successfully passed the quality control in the previous testing phase were tested with CBRN gases. The wide range of conditions (inlet concentrations, gas flow rates and relative humidity) under which the Filters were tested gives assurance that the obtained results can be taken as representative of diverse exploitation conditions.

The results of testing are given in Table 1.


The tested Filters M3 and M4 satisfied all the required parameters, shown in Tables 1 and 2, which was the main condition for the gas testing to be continued. Comparative testing of both types of Filters with the above mentioned test gases was done with the aim of justifying the idea of changing the impregnation of the active charge, in order to expand the existing requirement for the protection provided by the Filter M3.

The samples from the two above mentioned batches were tested for the following quality parameters:
moisture content (according to SORS 8830/05),
apparent density (according to SORS 8830/05),
mechanical strength (according to SORS 8830/05),
grain size distribution (according to SORS 8830/05).

Table 2. The range of obtained results for the tested parameters (weight, inhalation resistance and filtration efficiency) for all the tested Filters

Filter type | Weight [g] | Inhalation resistance [Pa] | Filtration efficiency [%]
Filter M3 | 294.6-301 | 122-138 | 1·10^-3
Filter M4 | 294.8-301.1 | 116-134 | 1·10^-3
Required values | ≤ 310 | ≤ 140.0 | ≤ 3·10^-3
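A minimal acceptance-check sketch in Python, treating the required values from Table 2 as hard limits; the function name and data layout are illustrative assumptions, not part of SORS 8829/05.

# Limits copied from the "Required values" row of Table 2.
REQUIRED = {"weight_g": 310.0, "inhalation_resistance_Pa": 140.0, "filtration_efficiency_percent": 3e-3}

def filter_accepted(measured):
    """measured: dict with the same keys as REQUIRED; True if every limit is met."""
    return all(measured[key] <= limit for key, limit in REQUIRED.items())

# Hypothetical Filter M4 sample within the ranges reported in Table 2
sample = {"weight_g": 295.0, "inhalation_resistance_Pa": 120.0, "filtration_efficiency_percent": 1e-3}
print(filter_accepted(sample))  # -> True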

3. RESULTS AND DISCUSSION


The results of comparative testing of Filters M3 and M4 with phosgene and chloropicrin are shown in Tables 3, 4, 5 and 6.
3.1.1. Sorption properties of Filters M3 and M4 upon phosgene challenge
Filters M3 and M4 were tested with phosgene under the following conditions:
inlet concentration of gas mixtures: 1000 ppm, 3500 ppm and 5000 ppm;
flow rate of gas mixtures: 30 l/min and 64 l/min;
relative humidity: 25 % and 80 %.
The results of the protection time for Filters M3 and M4 upon phosgene challenge, under different concentrations, flows and humidity, are shown in Tables 3 and 4.

The Filters M3 and M4 that were tested with CBRN gases and toxic chemicals were produced according to the standard production technology for the Filter M3, with 27 mm height of active fill.

Before testing, the Filters M3 and M4 were controlled according to the quality demands for the following parameters:
Filter weight (according to SORS 8829/05),
Inhalation resistance (according to SORS 8829/05),
Filtration efficiency on paraffin aerosol mist (according to SORS 8829/05),
and the range of obtained results is shown in Table 2.

Table 3. The protection time of Filters M3 and M4 against phosgene at a flow rate of 30 l/min for different inlet concentrations and humidity

Inlet concentration [ppm] | Relative humidity [%] | Protection time [min], Filter M3 | Protection time [min], Filter M4
1000 | 25 | 111 | 107
1000 | 25 | 110 | 108
1000 | 80 | 105 | 102
1000 | 80 | 106 | 104
3500 | 25 | 36 | 33
3500 | 25 | 37 | 35
3500 | 80 | 35 | 30
3500 | 80 | 35 | 32
5000 | 25 | 35 | 28
5000 | 25 | 34 | 29
5000 | 80 | 34 | 25
5000 | 80 | 33 | 25

Table 4. The protection time of Filters M3 and M4 against phosgene at a flow rate of 64 l/min for different inlet concentrations and humidity

Inlet concentration [ppm] | Relative humidity [%] | Protection time [min], Filter M3 | Protection time [min], Filter M4
1000 | 25 | 75 | 69
1000 | 25 | 77 | 67
1000 | 80 | 72 | 62
1000 | 80 | 74 | 60
3500 | 25 | 24 | 23
3500 | 25 | 25 | 24
3500 | 80 | 23 | 20
3500 | 80 | 22 | 21
5000 | 25 | 15 | 15
5000 | 25 | 16 | 15
5000 | 80 | 14 | 14
5000 | 80 | 13 | 13

Figure 1. Influence of relative humidity and flow rate on protection time upon phosgene challenge at constant inlet concentration of 1000 ppm.

Figure 2. Influence of inlet concentrations and flow rates on protection time upon phosgene challenge at constant relative humidity of 25 %.
3.1.2. Sorption properties of Filters M3 and M4 upon chloropicrin challenge
Filters M3 and M4 were tested against chloropicrin under the following test conditions:
inlet concentration of gas mixture: 2200 ppm, 3500 ppm and 5000 ppm;
flow rate of gas mixture: 30 l/min and 64 l/min;
relative humidity: 25 % and 80 %.

The differences in protection time upon phosgene challenge between the Filters M3 and M4, under various conditions of inlet concentration, flow rate and humidity, show that the standard impregnation (using the Cr6+ salt) has some advantages compared to the changed one (Zn2+ salt), Figures 1-2.
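As a rough illustration of this comparison (not part of the paper's methodology), the protection times from Table 3 can be averaged per filter in a few lines of Python; the values are copied from Table 3, while the aggregation itself is our own illustrative summary.

# Protection times [min] at 30 l/min, transcribed from Table 3.
m3 = [111, 110, 105, 106, 36, 37, 35, 35, 35, 34, 34, 33]  # Filter M3
m4 = [107, 108, 102, 104, 33, 35, 30, 32, 28, 29, 25, 25]  # Filter M4

mean_m3 = sum(m3) / len(m3)
mean_m4 = sum(m4) / len(m4)
print(f"mean M3: {mean_m3:.1f} min, mean M4: {mean_m4:.1f} min, "
      f"difference: {mean_m3 - mean_m4:.1f} min")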

The results of the protection time for Filters M3 and M4 upon chloropicrin challenge, under different concentrations, flows and humidity, are shown in Tables 5 and 6.

The explanation can be found in the fact that chromium salts are used as catalysts in reactions with CBRN agents. The most important reaction that explains the use of chromium is the oxidation of the CBRN agents and their decomposition [1]. The phosgene decomposition rate with Cr6+ is considerably higher than the rate with Zn2+ ions [2].

The influence of the phosgene flow rate on the protection time is considerable, because the flow rate directly determines the contact time between the Filter adsorbent and the test gas (Fig. 1). The differences in protection time between the two examined Filters are significant.


The other thing that confirms this claim about the influence of the flow rate is that, for the higher inlet concentrations of 3500 ppm and 5000 ppm and the flow rate of 64 l/min, the protection results of the examined Filters M3 and M4 are less divergent (Fig. 2). The differences between the results of the two examined Filters M3 and M4 caused by the relative humidity change (25 % and 80 %) are not as high as in the case of the flow rate changes (Fig. 1).

Table 5. The protection time of Filters M3 and M4 against chloropicrin at a flow rate of 30 l/min for different inlet concentrations and relative humidity

Inlet concentration [ppm] | Relative humidity [%] | Protection time [min], Filter M3 | Protection time [min], Filter M4
2200 | 25 | 26 | 32
2200 | 25 | 25 | 32
2200 | 80 | 21 | 22
2200 | 80 | 18 | 20
3500 | 25 | 18 | 23
3500 | 25 | 17 | 21
3500 | 80 | 16 | 20
3500 | 80 | 15 | 19
5000 | 25 | 14 | 19
5000 | 25 | 14 | 18
5000 | 80 | 11 | 14
5000 | 80 | 10 | 13

Table 6. The protection time of Filters M3 and M4 against chloropicrin at a flow rate of 64 l/min for different inlet concentrations and relative humidity

Inlet concentration [ppm] | Relative humidity [%] | Protection time [min], Filter M3 | Protection time [min], Filter M4
2200 | 25 | 10 | 12
2200 | 25 | 11 | 11
2200 | 80 | 7 | 10
2200 | 80 | 7 | 11
3500 | 25 | 6 | 9
3500 | 25 | 6 | 8
3500 | 80 | 5 | 7
3500 | 80 | 4 | 7
5000 | 25 | breakthrough | 5
5000 | 25 | breakthrough | 5
5000 | 80 | breakthrough | 4
5000 | 80 | breakthrough | 4

Figure 4. Influence of inlet concentration and flow rate change on protection time of phosgene at constant relative humidity of 25 %
Based on all the test results, the general impression is that both Filters M3 and M4, with the standard and the changed recipe, satisfy the main quality demands.
Both Filters M3 and M4 protect against the examined group of CBRN gases (phosgene and chloropicrin).
3.2. The sorption properties of Filters M3 and M4 upon ammonia, sulphur dioxide and chlorine challenge

The examination was performed according to the SRPS EN 14387 standard for the following toxic compounds: sulphur dioxide, ammonia and chlorine.

The protection time upon chloropicrin challenge of the Filters M3 and M4 is slightly different. The Filter M4 has a slightly longer protection time (Figure 3).

The Filters M3 and M4 were produced according to the standard production technology for the Filter M3, with 27 mm height of active fill. All the Filters satisfied the weight, inhalation resistance and paraffin aerosol mist efficiency quality demands, like the Filters used for testing with CBRN contaminants.

The influence of relative humidity on the adsorption of chloropicrin is greater than in the case of the phosgene challenge. Under the extreme conditions of a flow rate of 64 l/min and a concentration of 5000 ppm, the Filter M4 still provides minor protection, compared to the Filter M3, which does not protect against chloropicrin under these conditions (Figure 4).

The samples were tested with all the above mentioned gases from SRPS EN 14387 under the following test conditions [4]:
inlet concentration of gas mixtures: 5000 ppm,
flow rate of gas mixtures: 30 l/min and 64 l/min,
relative humidity: 70 %.

Decomposition and adsorption of chloropicrin on activated carbon are based on the mechanism of physical adsorption (at the interfacial area) and partly on chemical decomposition of chloropicrin on the impregnated carbon [3].


Figure 3. Influence of relative humidity and flow rate change on protection time for chloropicrin at constant inlet concentration of 2200 ppm

Table 7. Protection time for the Filter M3 and the Filter M4 against toxic compounds at flow rates of 30 l/min and 64 l/min

Inlet concentration [ppm] | Relative humidity [%] | Flow rate [l/min] | Test gas | Protection time [min], Filter M3 | Protection time [min], Filter M4
5000 | 70 | 30 | NH3 | 1 | 23
5000 | 70 | 30 | NH3 | 1 | 22
5000 | 70 | 64 | NH3 | breakthrough | 7
5000 | 70 | 64 | NH3 | breakthrough | 8
5000 | 70 | 30 | SO2 | 22 | 10
5000 | 70 | 30 | SO2 | 22 | 10
5000 | 70 | 64 | SO2 | 10 | 5
5000 | 70 | 64 | SO2 | 8 | 4
5000 | 70 | 30 | Cl2 | 35 | 30
5000 | 70 | 30 | Cl2 | 36 | 31
5000 | 70 | 64 | Cl2 | 22 | 20
5000 | 70 | 64 | Cl2 | 23 | 22


The obtained results are shown in Table 7. The expectations regarding the protection time of the Filters with two different types of impregnation were satisfied.


We have also observed that increasing the height of the activated carbon filling is an important factor for the protection time. This factor should be particularly examined with the aim of checking how an increase of the active fill influences other properties, such as the weight and inhalation resistance of the Filters.

The results confirmed that both Filter types protect against the following toxic gases: sulphur dioxide and chlorine; however, the Filter M3 did not protect against ammonia, while the Filter M4 did. Therefore, the Filter M4 has slightly better protective properties.
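A minimal sketch, assuming the Table 7 values are transcribed as below, showing how a per-gas grouping makes the ammonia difference between the two Filters visible; the data layout and the code are illustrative only.

# Rows: (gas, flow l/min, M3 time, M4 time), transcribed from Table 7; None marks breakthrough.
rows = [
    ("NH3", 30, 1, 23), ("NH3", 30, 1, 22), ("NH3", 64, None, 7), ("NH3", 64, None, 8),
    ("SO2", 30, 22, 10), ("SO2", 30, 22, 10), ("SO2", 64, 10, 5), ("SO2", 64, 8, 4),
    ("Cl2", 30, 35, 30), ("Cl2", 30, 36, 31), ("Cl2", 64, 22, 20), ("Cl2", 64, 23, 22),
]

for gas in ("NH3", "SO2", "Cl2"):
    m3 = [r[2] for r in rows if r[0] == gas]
    m4 = [r[3] for r in rows if r[0] == gas]
    m3_no_breakthrough = all(t is not None for t in m3)  # did M3 resist breakthrough for every row?
    print(gas, "| M3 without breakthrough:", m3_no_breakthrough, "| M4 times [min]:", m4)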

It is especially important that protection against ammonia is provided by the Filter M4. Ammonia is an easily accessible and cheap gas, which makes it suitable for paralysing systems or provoking general danger to people.

4. CONCLUSION

Taking into account that the Filter M4 provides protection against CBRN contaminants and toxic chemicals, including ammonia, we can reliably claim that it has some advantages in comparison to the Filter M3.

The aim of this work was to analyze the protection time of the Filters M3 (loaded with Cr6+ impregnated activated carbon) and M4 (loaded with Zn2+ impregnated activated carbon) against several different CBRN gases, as well as toxic industrial gases, under different concentration, flow and relative humidity conditions, in order to examine the validity of the idea of replacing the standard impregnation of activated carbon (using the Cr6+ salt) with the impregnation which uses the Zn2+ salt.

References
[1] Adapted from Smisek, M. and Cerny, S., Active Carbon, Elsevier Publ. Co., Amsterdam, 1970; and Jankowska, H., Swiatkowski, A., Choma, J., Active Carbon, Ellis Horwood, England, 1991. With permission.
[2] Taylor, E.R., Lethal Mists: An Introduction to the Natural and Military Sciences of Chemical, Biological Warfare and Terrorism.
[3] Novel Carbon Adsorbents, edited by Juan M.D. Tascon.
[4] SORS 8829:2005, Filter, Combined filter M3, Requirements and methods of testing.
[5] SORS 8830:2005, Personal CBRN protection device, Active carbon KI M3 for combined filter, Requirements and methods of testing.
[6] SORS 8748/04, Personal CBRN protection device, Requirements and methods of testing.
[7] SRPS EN 14387:2013, Respiratory protective devices - Gas filter(s) and combined filter(s) - Requirements, testing, marking, Institute for Standardization of Serbia.

The experiments performed have shown that there are differences in the protection time of Filters M3 and M4 against the test gases, which can be explained by the differences in physisorption, chemisorption, hydrolysis and decomposition of the test gases on the two types of activated carbon: the standard one, with the Cr6+ salt, and the one that contains the Zn2+ salt [3].
The protection time of the tested Filters against phosgene indicated that the standard impregnation has better protective properties. On the other hand, the Filter M4 showed better protective properties against chloropicrin. However, the overall results proved that the two Filters, M3 and M4, have met the most important quality demands, despite the observed differences.
Both Filters were also examined against the group of toxic industrial chemicals according to the SRPS EN 14387 standard, and it was proved that satisfactory protection was achieved for sulphur dioxide and chlorine.


INFLUENCE OF DAMAGED INJECTORS USED IN COMMON RAIL SYSTEMS ON ECOLOGICAL AND ENERGY EFFICIENCY
DEJAN JANKOVI
Tehnomotive ltd, Belgrade, dejan.jankovic@tehnomotive.com
MILETA RISTIVOJEVI
University of Belgrade-Mechanical Faculty, mristivojevic@mas.ac.bg.rs
DIMITRIJE KOSTI
Vinca Institute of Nuclear Sciences, Belgrade, dskostic@gmail.com

Abstract: Since the beginning of the 21st century, the expansion of common rail injection systems has been driven by stricter ecological standards, especially regarding the content of prohibited substances in the exhaust gases of internal combustion engines, so this topic is gaining in importance. Considering that fuel injectors are the elements of the common rail system most prone to defects, the influence of corrosion on the service life of fuel injectors is analyzed. The impact of such damage on ecological and energy efficiency is presented. This analysis also covers the role of reparation in the extension of the service life of injectors, as no mechanical part, element or machine is immune to abrasion. The repair efficiency percentages were specified, taking into account specific operational mileages. The evaluation was carried out by a methodology for determination of the wear of critical contact surfaces with tight tolerances, as a vital part of the injector assembly.
Keywords: injector, corrosion, reparation, ecology, energy.
Maintaining the contact surfaces of the fuel injectors within tight tolerances is very difficult; due to sliding wear and the aggressive behavior of corrosion, deviation from the tolerance limits occurs very quickly during the operating period.

1. INTRODUCTION
Consumer fever has imposed a high rate of production on the world, which results in the destruction of natural resources and the deterioration of the environment. Fuel injection systems have a strong influence on the ecological, economic and energy efficiency of diesel engines. These systems are constantly improved, driven by strict requirements to reduce environmental pollution and to increase efficiency.

The main cause of injection system failure is damage to the fuel injectors, due to adhesive wear of the contact surfaces of the valve set. As a result, the increased gap values reduce both the energy efficiency and the environmental efficiency of diesel engines.

The necessary resistance to corrosion of metal structures


can be achieved by studying and defining the basic laws
of the corrosion process, proper selection and
performance of the protection system. Also, processing
the corrosive environment, appropriate design and choice
of materials, on the basis of tests for resistance to
corrosion in different environments should be the subject
of studies.

2. REPARATION IN ORDER TO IMPROVE


THE ENERGY AND ENVIRONMENTAL
EFFICIENCY OF FUEL INJECTORS

In that sense, there is an increasing need for reparation.

According to official estimates, the environmental and economic threat to our planet is alarming. This vulnerability is caused by small and big countries, rich and poor, all in order to achieve an appropriate profit. Developed countries pass on to less developed countries technology that is inadequate from an environmental point of view, the so-called dirty technology. At the same time, highly developed countries, despite modern technologies and high-quality filters, contribute even more to the environmental and energy threat to the planet due to their significantly higher production volumes [1].

Reparation, as an alternative to production, is fundamentally more economical than production: the value of a new fuel injector is much higher than the cost of repairing the old one.

With the reparation technology for mechanical parts and assemblies, in addition to economic effects, significant ecological effects can be achieved. The life cycle of a product is shown in Picture 1.

Application of various metals and alloys in unfavorable


conditions, which can change during operation, requires
the use of different non-destructive and destructive
methods for testing and evaluation of resistance to
corrosion.


Picture 1. Life cycle of a product

3. DAMAGES TYPES

3.1. Corrosion

Within the injection systems of the new generation of diesel engine vehicles, the common rail system is largely prevalent; it is subject to almost all types of damage, such as:
mechanical damage (corrosion, abrasion, adhesion, pitting, etc.),
damage by radiation,
chemical damage,
electrochemical damage.

Corrosion is any failure that occurs because of one of the following corrosion phenomena: general corrosion, pitting, crevice, galvanic, intergranular, stress corrosion cracking, microbiologically induced corrosion, dezincification, erosion-corrosion, cavitation, hydrogen sulfide and hydrogen induced corrosion, corrosion under thermal insulation, dew point corrosion, stray current corrosion, under-deposit corrosion, overheating corrosion, chemical cleaning corrosion and welding corrosion.

Damage usually occurs due to corrosion.

The importance of the human factor in the occurrence of corrosion failures has resulted in the development of correct anti-corrosion management, which must be designed in such a manner that it increases the human potential for making correct decisions.
Corrosion phenomena related to corrosion control and
corrosion monitoring similarities to other subjects,
namely, delivering anti-corrosion methods and services
faster, better, cheaper and newer. This is the quality of
anti-corrosion management. But, corrosion as a
phenomenon influences the quality of products, quality of
service and of course, influences our perception. If
corrosion is connected with quality, we may talk about
general anti-corrosion management quality which
includes design, performance (implementation), changes
(improvement), and control at all stages. Both corrosion
and quality are connected with reliability, availability and
profitability of any industry.

Picture 2. Corrosion in injector valve set


According to present data, losses caused by metal corrosion
and wear during operation period are about 6% of the initial
mass. Damage caused by corrosion can lead to emergency
disconnection of the injector, facilities, equipment, vehicles,
etc. which creates great material losses.

Anti-corrosion management is supervision at the design


stage for suitability of material selection, protective anticorrosion measures (corrosion control) and corrosion
monitoring. We can show how investments in corrosion
control affect the quality of goods, materials, work,
services, profitability, forecasted opportunities, customer
satisfaction, safety, and effectiveness [3].

Indirect cost of corrosion is connected with the ecological


impact on the environment, loss of expensive chemicals, a
contamination of technological streams by corrosion
products, loss of efficiency, overdesign and shutdowns.
The corrosion risk is related to environmental pollution by
hazardous chemicals, fuels, and gases, resulting in
possible fires and explosions, damage to people, animals,
plants, air, soil and water.

It is very important to know where the problem occurs. It is therefore very important to analyze typical mistakes in anti-corrosion management. Some of these mistakes are:


1. Improper design and selection of metallic materials for attaining the required service life of the equipment;
2. Improper selection of preventive anti-corrosion measures;
3. Lack of or insufficient use of corrosion monitoring methods;
4. Lack of damage assessment, analysis, monitoring of failures and their detection;
5. Lack of or insufficient corrosion education and training. Most engineering and technical personnel have little or no education and training in corrosion science and engineering;
6. Improper use of, or failure to use, the technical standards and other documentation in the field of corrosion control and monitoring.


The corrosive effects of diesel oil depend largely on acidic oxygen complexes of natural origin or on the ageing processes taking place in the oil itself, as well as on the content of sulfur and water compounds. Thus, the quality and type of fuel that feeds an engine substantially affects the intensity of the processes being discussed. Recently, fatty acid methyl esters (FAME) have been popularized as biodiesels, mainly for ecological reasons.

4.1. Failure analysis


The main cause of the injection system failure is damage
to the injectors, due to corrosion of contact surfaces of the
valve set.

In manufacturing, corrosion costs are incurred in the


product development cycle in several ways, beginning
with the materials, energy, labor, and technical expertise
required to produce a product. For example, a product can
require painting for corrosion protection. A corrosion
resistant metal can be chosen in place of plain carbon
steel, and technical services can be required to design and
install cathodic protection on a product. Additional heat
treatment can be needed to relieve stresses for protection
against stress-corrosion cracking.
Picture 3. Valve set of injectors

4. CORROSIVE WEAR OF FUEL INJECTOR


ELEMENTS USED IN COMMON RAIL
SYSTEMS

Sliding surfaces condition score in terms of wear:


Initially, acceptable (1)
On the border of acceptability (2)
Partly, unacceptable (3)
Intensively, unacceptable (4).

A common rail is one of the most important components


in a diesel and gasoline direct injection system. The main
difference between a direct and a standard injection is the
delivery of fuel and the way how this one mixes with
incoming air. In the direct injection system, the fuel is
directly injected into the combustion chamber, skipping
the waiting period in the air intake manifold. Controlled
by the electronic unit, the fuel is squirted directly where
the combustion chamber is hottest, which makes it burn
more evenly and thoroughly.
The majority of diesel engine problems stem from
contaminated fuel. Common problems include corrosion
from excessive water in the fuel, micro fine particles in
the fuel and improper fuel storage, which is caused by
water in the fuel. There are two ways in which water can
get into the fuel: through the delivery system and through
the tank vent.
The process of wear can be defined as changes in the
injection system as a result of use and leading to a gradual
loss of functionality or permanent damage. Due to
exceptionally difficult operating conditions, the elements
of common rail systems being most prone to defects are
fuel injectors [6].

Picture 4. Failure analysis

Corrosion due to chemical or electrochemical


mechanisms plays an important role in causing defects in
fuel injection elements.

5. REPARATION
The technological process of reparation of every machine has its own specific features. A common technological process of reparation consists of a series of operations.


The order of operations:
Dismantling
Cleaning of samples
Damage analysis
Selection of repair methods
Cost-benefit analysis
Technical documentation
Development of the technological process
Sample preparation
Implementation of the chosen repair procedure
Control

5.5. Technical documentation

If a damaged part of a machine has no accompanying technical documentation, the documentation must be produced on the basis of measurements of the required geometry, kinematics and mechanical characteristics.

5.6. Metallization
There are many different metallization procedures, which differ according to the mode of operation of the metallization device, the type of material and the type of fuel gas used during the procedure. Basically, these procedures can be divided into cold and hot metallization processes.

5.1. Dismantling
In the process of dismantling the following should be
taken into account:
To avoid damaging parts assembly
To prevent additional damage to already damaged
machine part
Preserve the original damage.

Hot metallization can be carried out by applying powder to a relatively cool surface and subsequently heating the surface and melting the powder, or by simultaneously heating the surface and applying the melting powder. The hot process can metallize only materials that have a melting point above 1150 °C.

5.2. Cleaning of samples

The basic characteristic of cold metallization is powder coating on a cold surface, at a surface temperature of about 250 °C. Cold metallization can be performed by the following procedures: gas, plasma, supersonic and gas-detonation processes. The cold process can be applied to all metallic materials, except pure copper, glass and non-metals. Depending on the type of metallization, the applied materials may be in the form of wire or powder.
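As a hedged illustration of the selection criteria quoted above (hot process only for materials with a melting point above 1150 °C; cold process excluded for pure copper, glass and non-metals), a simple decision sketch in Python could look as follows. The function and the rule ordering are our own simplified reading of the text, not a prescribed selection procedure.

# Minimal sketch, assuming the thresholds quoted in the text.
def metallization_process(material, melting_point_c, is_metal=True):
    excluded_from_cold = {"pure copper", "glass"}
    if is_metal and material not in excluded_from_cold:
        return "cold metallization (gas, plasma, supersonic or gas-detonation)"
    if melting_point_c > 1150:
        return "hot metallization"
    return "no suitable process under the quoted criteria"

print(metallization_process("steel valve shaft", melting_point_c=1450))  # -> cold metallization ...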

In order to perceive the condition of damaged area, they


should be cleaned first. This operation involves removing
dirt from damaged areas, as well as their cleaning and
degreasing. The most common impurities are: remains of
oils and fats, metal particles, corrosion, soot, etc.
Quality repair technology depends significantly on the
quality of implementation of clean-up operations.
Therefore, this operation should be consistently
monitored.

5.7. Reparation of the injector valve shaft
Reparation of the injector valve shaft, as the main cause of injection system failure, can be implemented by cold metallization in a vacuum (applying a titanium nitride layer with a thickness of a few micrometers).

5.3. Damage analysis


Analysis is carried out to determine the underlying cause
breakage or damage to mechanical parts. Cleaning of
damaged areas follows analysis of damages in respect of
type, quantity and location, conditions of production,
conditions of installation, operating conditions, structural
conditions based on technical documentation.
To carry out this analysis, it is necessary to collect data on
the damage level, tolerance of linear dimensions, shape
and position tolerances, surface roughness, cracks, both visible and hidden.

5.4. Reparation method


Methods of reparation will be selected based on the
analysis of the damage, in qualitative and quantitative
terms, type of general material and function of use of the
mechanical part.

Picture 6. Valve shaft before reparation, after reparation


by titanium nitride and the new one

6. CONCLUSION
The method of reparation can be mechanical (cold) or warm. Cold repair is dominant in the repair of mechanical parts or in producing damaged parts from stainless steel. The latest research has focused on the development of technological procedures for the application of additional materials which do not require a large heat input, which is achieved by cold metallization.

The material selection is a point that shouldn't be underestimated. Good mechanical properties ensure strength and prevent corrosion. The common rail for a diesel engine is made of steel, while the common rail for a gasoline engine is made of stainless steel, because the fuel is too corrosive and stainless steel possesses better resistance to corrosion than plain steel.


The quality of the common rail is of critical importance.


Forging has the biggest contribution to the efficient
prevention of eventual components failures.


Forging and in particular hot forging strengthens the


material by closing empty spaces within the metal while
deforming and shaping it with localized compressive
forces. A forged common rail is stronger and possesses
higher resistance to pressure and corrosion.


The results of a number of analyses show that corrosion is a factor significantly affecting the failure frequency of common rail systems [7].


The need for reparation within the injection system also exists. Reparation becomes a competitor to spare parts, which is the only real way to keep the ecological, economic and energy efficiency of a diesel engine within the desired limits.


The impact of fuel quality must be addressed with new facilities for the distillation of oil. Abrasion is an inevitable process, because it depends on a number of influencing factors. It may be more or less intense, so reparation due to wear is inevitable. If some method of protection against wear is used, the repair process is only postponed for some interval of time.


References
[1] Ristivojevic M., Odanovic Z., Stamenic Z., Lazovic T., Repair in the function of economy, energy efficiency and ecology, Technique-Engineering No. 2 (in Serbian), year 58, 2008.
[2] Groysman A., Brodsky N., Corrosion and Quality, Proceedings of the 15th International Conference of the Israel Society for Quality, November 16-18, Jerusalem, 2004, 223-230.
[3] Groysman A., Anti-Corrosion Management, Environment and Quality at the Oil Refining Industry, Technion-Israel Institute of Technology, 2006.
[4] Jankovic D., Implementation of design of experiments to the reparation of fuel injectors, OeTG Symposium 2014.
[5] Jankovic D., Ristivojevic M., Determining reliability of fuel injectors in exploitation, Danubia 2012.
[6] Knefel T., Technical assessment of Common Rail injectors on the ground of overflow bench tests, Maintenance and Reliability, Vol. 14, No. 1 (2012), p. 42-53.
[7] Abramek K.F., Stoeck T., Osipowicz T., Statistical Evaluation of the Corrosive Wear of Fuel Injector Elements Used in Common Rail Systems, Strojniski vestnik - Journal of Mechanical Engineering, 61 (2015) 2, 91-98.

DEFECT DURING PRODUCTION OF STEEL CARTRIDGE CASE


NADA ILI
Military Technical Institute, Belgrade, nadalic67@gmail.com
LJUBICA RADOVI
Military Technical Institute, Belgrade, ljubica.radovic@vti.vs.rs

Abstract: During the production of steel cartridge cases, cracks or fracture of the work piece after the 1st drawing were observed. Additional control of the preforms (round discs), both visual and by magnetic particle inspection, revealed the presence of surface cracks. Further metallurgical analyses of the metallographic specimens, as well as of the fractured surfaces, were performed (optical and SEM/EDS). It was assumed that the presence of slag and coarse inclusions had a decisive influence on the slug quality and the cartridge case failure during production.
Keywords: AISI 1030 steel, cartridge case, slag, inclusions.

1. INTRODUCTION
Conventional steel cartridge cases, such as the 30 mm one, are produced from AISI 1030 carbon steel in the form of plate or rod as a starting material [1]. This is a medium carbon steel with moderate strength and hardness in the as-rolled condition. It can be hardened and strengthened by cold work. The formability of AISI 1030 carbon steel in the annealed condition is good, which allows a high reduction of the wall thickness during the manufacture of a cartridge case. It also has fair machinability, ductility and good weldability [2-4].

For the investigation, ten preforms were supplied: five with and five without visible cracks, together with two cups, one with cracks and one fractured after the 1st drawing.

2.2. Methods
Visual observation was performed by the naked eye and a magnifying glass, while for magnetic particle inspection a hand-held magnetic yoke was used. The inspection was performed according to SRPS EN ISO 9934-1.

In the manufacture of cartridge cases various kinds of


defects can be present, related either to material
(microstructure, impurities) or technology parameters (load,
temperature, cooling rate, lubrication etc.). These defects
appear in the form of splits, seams, or decarburization on the
outer surfaces and can be deleterious to the manufacture of a
quality cartridge case [1, 5-6].

Transverse cross sections of the preforms near the cracks were taken and prepared for metallographic examination. The content of the inclusions was estimated according to SORS 1648. A 3 % nital etchant was used for revealing the microstructure. The microstructure of the material was examined by light microscopy (LM), as well as by scanning electron microscopy (SEM, JSM 6610 LV).

The aim of this work was to evaluate the cause of the cracking and failure during steel cartridge case manufacturing in the Sloboda Company, Čačak.

Energy dispersive X-ray spectroscopy (EDS) was used in conjunction with SEM to characterize the composition of the chosen areas.

2. EXPERIMENTAL WORK

3. RESULTS AND DISCUSSION

2.1. Material and technology

3.1. Visual examination

For the manufacture of the 30 mm cartridge case, slugs made of hot-rolled plate of AISI 1030 steel were used. The chemical composition of the steel was verified by an accredited laboratory. The slugs (round discs with a thickness of 13.5 mm and a diameter of 78 mm) were subsequently annealed and formed by pressing into preforms (as shown in Figure 1), in the Sloboda Company.

Visual examination revealed the presence of cracks on five of the ten supplied preforms. The cracks were observed on both sides of the preforms. The typical appearance of the preforms with cracks is shown in Figure 1. In Figure 2, a cup with a crack on the inner side is shown. A crack was also observed at the bottom of the fractured cup, shown in Figure 3. The cracks at the inner side were wider and more open. This observation can be attributed to a larger load, due to contact between the tool and the workpiece, or to poor lubrication.

After this step, visual inspection revealed cracks on 30 of the 500 delivered slugs. During the next step of the manufacturing process, when the preform was formed into a cup shape with wall thinning (1st drawing), cracks were also observed in some specimens and some of them fractured.


Figure 3. Preform after magnetic particle inspection.

3.3. Metallographic examination
The microstructure of the cartridge case material, observed by OM and SEM, is shown in Figure 4. A ferrite-pearlite microstructure with spheroidized pearlite was observed. This microstructure is expected after the applied thermo-mechanical treatment of AISI 1030 steel. On the other hand, a large number of impurities were visible.
Optical microscopy revealed the presence of large aluminate and oxide inclusions (Figures 5 and 6). The content of the inclusions was B+D = 4. According to SORS 1648, the maximum allowed inclusion content is (B+D) = 2.5, i.e. (A+B+D) = 3.5. Obviously, the inclusion content was greater than allowed. On the outer side of the preform 2, several deep cracks were observed (Figure 2). Their typical appearance on the perpendicular section is shown in Figures 6a and b. The microstructure around the cracks is decarbonized, with coarse ferrite grains. Decarburization is more visible in the SEM micrograph (Figure 7). A similar microstructure was observed around the crack on the cup shown in Figure 2 [7].

Figure 1. Preform with surface defects: a) outer side; b) inner side.

The presence of the decarbonized layer and the related softening may indicate an incorrect heat treatment process [8].

Figure 2. Cup with defects.
3.2. Magnetic particle inspection
After magnetic particle inspection, the cracks observed with the naked eye became more visible. In addition, previously unidentified cracks were found in two other preforms, as shown in Figure 3.



Figure 4. Microstructure of the preform 2: a) OM; b) SEM

Figure 7. Decarbonization in the crack area, preform 2, SEM

3.4. EDS analysis
The energy-dispersive X-ray analysis (EDS) of the preform, in the vicinity of the cracks, revealed spectra consisting of elements associated with the type of steel under investigation, such as Fe, S and Si, as well as oxygen. This indicates the presence of oxide inclusions (Figures 8 and 9). Inside the crack, Al, Ca, Mg, Si, Mn and Fe were detected. The high content of oxygen indicates the presence of oxides of these elements (Al2O3, CaO, MgO, SiO2, MnO·FeO) (Figure 8), which corresponds to the slag composition. Slag inclusions, usually oxides, sulfides or silicates, come from many sources during the steel making process [9-10]. They have irregular shapes and a discontinuous nature, and therefore serve as stress concentrators and limit the ability of the material to withstand stresses. They negatively affect the appearance and mechanical properties of the final products. A cleaner steel has better formability and is less prone to fatigue and corrosion [10-11]. Crack resistance and formability during deep drawing depend on multiple metallurgical factors, some of which are: steel chemistry, microstructure and cleanliness, strength level, etc. [12-13].

Figure 5. Nonmetallic inclusions in the central part of the preform 2

A decarbonized layer and grain growth were observed close to the slag inclusions; together with the related softening, this indicates an incorrect heat treatment process. Also, crack initiation was related to the slag inclusions. It is assumed that the lower cooling rate around the slag inclusions could create conditions for effective decarburization. Due to the decrease of strength and toughness, the initiated crack could have propagated easily.
The results of the microanalysis of the typical inclusions in the preform microstructure, far from the cracks, are shown in Figure 10. It was shown that the inclusions are mostly aluminates, oxides and silicates (Al2O3, MnO and SiO2).

Figure 6. Microstructure around the cracks in the preform 2

Figure 8. EDS analysis of preform specimen

Figure 10. EDS line profiles of the nonmetallic inclusions

4. CONCLUSION
Visual examination of the ten supplied preforms revealed the presence of cracks on five preforms, while magnetic particle inspection revealed the presence of cracks on two additional preforms.
Metallographic examination revealed a ferrite-pearlite microstructure of the preforms, with some spheroidized pearlite, which is in accordance with the previous heat treatment.
The observed inclusions were mostly of the aluminate and oxide type and their content was higher than allowed by SORS 1648. Moreover, decarburized areas and the presence of slag were detected in the vicinity of all the analyzed cracks.
The presence of slag and coarse inclusions had a decisive influence on the preform quality and the cartridge case failure during production.

Figure 9. EDS analysis of the area containing the slag inclusions


[7] Ili N., Radovi Lj., Testing report on slag and cup
made of R-3 for cartridge case T-4, Internal report
03/16, Military Technical Institute, Belgrade, 2016.
[8] ASM Metals Handbook, Vol. 4, Heat treatment,
ASM Publications, Metals Park Ohio, 1991.
[9] Zhang L. Thomas G. B., Evaluation and control of
steel cleanliness-Review, 85th Steelmaking
Conference Proceedings, ISS-AIME, Warrendale,
PA (2002), 431-452.
[10] Daniel H. Herring, Steel Cleanless: Inclusions in
Steel, 2009, Industrialheating.com, http://www.heattreat-doctor.com.
[11] Shannon N. G., Oxide inclusion behavior at the
steel/slag interface, ProQuest Dissertations And
Theses; Thesis (Ph.D.)-Carnegie Mellon University,
2007; Publication Number: AAI3281242; ISBN:
9780549226499; Source: Dissertation Abstracts
International, Volume: 68-09, Section: B, 6236; 126.
[12] ASTM Handbook. Vol. 11, Failure Analysis and
Prevention, Ed. by R.J. Shipley and W.T. Becker,
ASM Publications, Metals Park Ohio, 2002.
[13] Kushnir M. A., Statistical analysis of Steel
Formability, SUGI 30 Proceedings, Philadelphia,
2005, 166-30, www2.sas.com/proceedings/sugi30.

References
[1] Brey J. R., Havron A. L., Development of 30 mm
thinwall steel cartridge case, Contractor report arccdCR-93005, U.S. Army armament research,
development and engineering center, New Jersey,
1993.
[2] http://www.rolledmetalproducts.com/carbon-steel1030/
[3] http://www.rolledmetalproducts.com/carbon-steel1030/
[4] ASM Metals Handbook, Vol. 1, Properties and
Selection: Irons, Steels, and High-Performance
Alloys, Metals Park, Ohio, 1979.
[5] M. S. Abo-Elkhair, Failure Analysis of Cartridge
Cases Due to Manufacturing Stresses or Due to Fire
under Action of Internal Loading, IJMSE Vol. 3 (1)
(2013), 43-49.
[6] Research and Development of Materiel, Engineering Design Handbook, Ammunition Series, Section 6, Manufacture of Metal Components of Artillery Ammunition, Headquarters United States Army Materiel Command, Washington, D.C. 20315, 1964.


FAILURE ANALYSIS OF THE STATOR BLADE


JELENA MARINKOVI
Military Technical Institute, Belgrade, jecamarinkovic@gmail.com
DUAN VRAARI
Military Technical Institute, Belgrade, vracaricd@gmail.com
LJUBICA RADOVI
Military Technical Institute, Belgrade, ljubica.radovic@vti.vs.rs
IVO BLAI
Military Technical Institute, Belgrade

Abstract: Materials used to make rotor and stator turbine blades are designed to be very durable and to have a lifespan of 25 to 40 years under very difficult working-environment conditions. However, the presence of abrasive reagents within the exploitation environment dramatically reduces the lifetime and proper functionality of the blades. Therefore, special types of stainless steel (with a chromium content of 11-12 wt.% and a low nickel content of 0.3-0.8 wt.%) are used to make the blades. This paper presents a failure analysis of a stator blade made of a special martensitic creep-resisting steel (grade X22CrMoV12-1). The investigation of the fracture was conducted at the blade root by macro fractographic analysis on a stereo microscope and microfractographic analysis of the fracture surface on a scanning electron microscope (SEM). The fractographic analysis showed that the fracture occurred due to the presence of corrosion pits and that the cracks propagated by the mechanism of corrosion fatigue.
Keywords: stator blade, failure, pitting corrosion, fractographic analysis.

1. INTRODUCTION
One of the most important roles of the turbine blade in a steam turbine is to convert the kinetic energy of the steam into mechanical energy [1]. The fracture of the blades is the most common cause of failure in steam turbines; it is caused by the forces acting on the blade, which are centrifugal forces, centrifugal bending, steady steam bending, unsteady centrifugal forces due to lateral shaft vibration, and alternating bending [2, 3]. Operation of the turbine blade in a corrosive and abrasive environment also contributes significantly to blade failure [4-7]. Some previous investigations reported that fatigue, stress corrosion cracking and corrosion fatigue are the general causes of blade failure [6, 8-10].

This study is the result of the failure analysis of a stator blade from the thermal power plant Kolubara, Veliki Crljeni. The aim of this work was to identify the cause of the stator blade failure and to determine the factors that may have affected the mechanism of crack initiation and propagation.

In general, blade failures can be grouped into two categories: (a) fatigue, including both high cycle (HCF) and low cycle fatigue (LCF); and (b) creep rupture [11]. It should be noted that, according to the experience of the American Electric Power Research Institute (EPRI), materials for turbines are very durable and can have a lifespan of 25 to 40 years in difficult working conditions, but no turbine can function over a long period in the presence of caustic soda (NaOH) or acid [5]. Under such influences, materials for discs or rotors fail due to stress corrosion after a few hundred hours, at stresses less than 10 % of the material yield strength. The 12 wt.% chromium martensitic stainless steel X22CrMoV12-1 is widely used in manufacturing of low pressure turbine casings and blades due to its superior strength and corrosion properties, which are suited for steam turbine environmental conditions [12]. These steel grades can tolerate NaOH, but corrosion fatigue fracture will occur, due to the appearance of corrosion pits and crack propagation through the corrosion pits, at very low values of vibratory stress (less than 6.7 MPa) in many corrosive agents [5].

2. EXPERIMENTAL WORK
2.1. Testing methods
The chemical composition of the stator blade root material was determined by optical emission spectrometry (ARL 2460), according to SRPS C.A1.011 (2004).
The microstructure was characterized by a Leitz Metalloplan optical microscope (OM). The metallographic sample was prepared using traditional grinding and polishing techniques, and Kalling's etchant was used to reveal the microstructure.


Hardness measurements were carried out using the Vickers hardness method (HV30), on a Wolpert Diatestor 2RC hardness tester, according to SRPS EN ISO 6507-1:2011. The hardness was determined as the mean value of the results of five measurements.
The analysis of the fracture surface was done by a stereo microscope, as well as by a scanning electron microscope (SEM, JEOL JSM 6610LV). The local chemical analyses were carried out using energy dispersive spectroscopy (SEM/EDS).


The sampling for the hardness test (part "2B") and the chemical analysis (part "2C") was done according to Figure 1.

Figure 3. Secondary crack in the vicinity of the fracture surface
Macro fractographic observation of the blade root using the stereo microscope revealed typical features of the fatigue mechanism of fracture (Figure 2) and a secondary crack in the vicinity of the fracture surface (pointed out with arrows in Figure 3). This crack connected several pits. Further macro fractographic analysis revealed severe pitting corrosion with deep pits (arrows in Figures 4 and 5).

Figure 1. Sampling for chemical analysis and hardness testing

3. RESULTS AND DISCUSSION
3.1. Visual examination
The root of the stator blade, shown in Figure 2, was subjected to visual examination.

Figure 4. Severe pitting corrosion at the blade root

Figure 2. The blade root

Figure 5. Arrows indicate the pits at the blade root




3.2. Chemical composition
The chemical composition of the stator blade material is given in Table 1.

Table 1. Chemical composition, wt.%
C | Si | Mn | P | S | Cr | Mo | Ni | Cu | Al | V
0.20 | 0.35 | 0.59 | 0.02 | 0.002 | 11.93 | 0.91 | 0.68 | 0.07 | 0.01 | 0.25

The results of the chemical analysis show that the tested stator blade was made of the steel grade X22CrMoV12-1.

3.3. Hardness
The hardness measurement was done on the cross-section sample ("2B") taken from the blade root (Figure 1).
The result has shown that the average hardness is 303 HV30. This value corresponds to a hardness of about 290 HB (ASTM E 140-07), which is in accordance with the manufacturer's attest (292 HB).

Figure 7. Secondary crack, growing by connecting several pits
Figure 8 shows the final stage of fracture, a static fracture, which occurred under a one-time load. This fracture area is small (about 1 mm), which indicates that the final fracture occurred at low stress.

3.4. Microstructure
The microstructural examination was carried out on a sample taken from part "2A" in the transverse (P) direction (Figure 1). The observed microstructure is shown in Figure 6.


Chemical analysis the fatigue crack initiation site, shown
in Figure 9, revealed the presence of S, O and Cl, which
indicates that during the corrosion process sulphate and
chlorine ions were active.

Figure 6. Microstructure of the X22CrMoV12-1 alloy of the stator blade
Corrosion products were present at the fracture surface, which significantly impeded the fractographic analysis.

A martensitic structure with the presence of needle-like bainite was observed, which is in accordance with the chemical composition and temper condition of the used steel.
3.5. Fractography
In Figure 7, a secondary crack on the blade root is shown. It can be clearly seen that the crack was initiated in a corrosion pit and grew by connecting several pits.




ACKNOWLEDGEMENT
Authors would like to thank the company Kolubara,
Veliki Crljeni for being a part of its study The failure
analysis of the stator blade.

REFERENCES
[1] http://www.explainthatstuff.com/turbines.html
[2] Sohre S. J., Steam turbine blade failures, causes and correction, Proceedings of the Fourth Turbomachinery Symposium, Texas A & M University Laboratories, 1978.
[3] Nurbanasari M., Abdurrachim, Crack of a first stage
blade in a steam turbine, Case Studies in
Engineering Failure Analysis, 2 (2) (2014) 5460.
[4] Jonas O., Turbine corrosion Steam turbine
corrosion, CORROSION/84 (Paper 55), Louisiana,
1984.
[5] ASM metals handbook: Corrosion, Vol. 13. 9th ed.
Materials Park, Ohio, 1987.
[6] Wang W.-Z., F.-Z. Xuan, Zhu K.-L., Tu S.-T.,
Failure analysis of the final stage blade in steam
turbine, Engineering Failure Analysis, 14 (4)
(2007) 632641.
[7] Jonas O., Machemer L., Steam turbine corrosion and
deposits problems and solutions, Proceedings of the
thirty-seventh turbomachinery symposium, Texas A
& M University, Laboratories (2008) 211228.
[8] Kubiak Sz J., Segura A., Gonzales G., Garca J.C.,
Sierra F., Nebradt J., Rodriguez J. A., Failure
analysis of the 350 MW steam turbine blade root,
Engineering Failure Analysis, 16 (4) (2009) 1270
1281.
[9] Ziegler D., Puccinelli M., Bergallo B., Picasso A.,
Investigation of turbine blade failure in a thermal
power plant, Case Studies in Engineering Failure
Analysis, 1 (3) (2013) 192199.
[10] Cuevas-Arteaga C., Rodriguez JA, Clemente CM,
Rodrguez JM, Mariaca Y, Pitting Corrosion
Damage for Prediction Useful Life of Geothermal
Turbine Blade, American Journal of Mechanical
Engineering, 2 (6) (2014) 164-168.
[11] Mehdi Tofighi Naeem, Seyed Ali Jazayeri, Nesa
Rezamahdi, Failure Analysis of Gas Turbine
Blades, Proceedings of The 2008 IAJC-IJME
International Conference (Paper 120, ENG 108)
2008.
[12] Booysen C., Heyns P.S., Hindley M.P., Scheepers
R., Fatigue life assessment of a low pressure steam
turbine blade during transient resonant conditions
using a probabilistic approach, International
Journal of Fatigue, 73 (2014) 17-26.

Figure 9. EDS analysis in the pit: a) spectrum; b) analyzed site

4. DISCUSSION
The chemical composition, hardness and temper of the blade root material correspond to the required manufacturer's specification. The blade was made of X22CrMoV12-1 steel. The microstructure consists of martensite with bainite.

Macro fractographic analysis revealed two areas of fatigue fracture, on the convex and on the concave side of the blade root, with a small central part corresponding to the final stage of fracture (residual ligament), Figure 2. The presence of fatigue cracks on the two opposite sides of the blade root indicates the presence of alternating bending [2, 5, 7]. One crack started on one side of the blade and progressed until it met the second crack, which began on the opposite side of the blade. The central part of the fracture surface (the residual ligament) is small compared to the fracture surface, which indicates that the fracture occurred at low stress.
Significant pitting corrosion on the blade root surface (very near the fractured surface) was caused by the presence of sulphur, chlorine and oxygen ions, which were detected. It was the main reason for crack initiation. The cracks grew by the pit-connecting mechanism. It is obvious that this area (the blade root) is the most affected by both the mechanical and the corrosion factors for crack initiation.
We emphasize that the recognized type of failure can occur at very low values of vibration stress, which are at the level of 10 % of the yield point of the used material [5].

5. CONCLUSION
Chemical composition, hardness and microstructure of the blade root material correspond to the used steel grade and temper. The tested sample has a martensitic structure with the presence of needle-like bainite.
The chemical analysis of the deposits on the fracture surface (SEM/EDS) revealed the presence of chlorine, sulfur and oxygen, which means that the processes that led to the intense pitting corrosion were caused by the presence of chlorine and sulfate ions.
Fractographic analysis showed that the cracks initiated at corrosion pits and propagated by the mechanism of corrosion fatigue.

READOUT BEAM COUPLING STRATEGIES


FOR PLASMONIC CHEMICAL OR BIOLOGICAL SENSORS
ZORAN JAKŠIĆ
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
Belgrade, Serbia, jaksa@nanosys.ihtm.bg.ac.rs
MILE M. SMILJANIĆ
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
Belgrade, Serbia, smilce@nanosys.ihtm.bg.ac.rs
ŽARKO LAZIĆ
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
Belgrade, Serbia, zlazic@nanosys.ihtm.bg.ac.rs
DANA VASILJEVIĆ RADOVIĆ
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
Belgrade, Serbia, dana@nanosys.ihtm.bg.ac.rs
MARKO OBRADOV
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
Belgrade, Serbia, marko.obradov@nanosys.ihtm.bg.ac.rs
DRAGAN TANASKOVIĆ
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
Belgrade, Serbia, dragant@nanosys.ihtm.bg.ac.rs
OLGA JAKŠIĆ
Centre of Microelectronic Technologies, Institute of Chemistry, Technology and Metallurgy, University of Belgrade,
Belgrade, Serbia, olga@nanosys.ihtm.bg.ac.rs

Abstract: Plasmonic devices are among the most sensitive contemporary sensors of chemical or biological analytes, in some cases even reaching single-molecule sensitivity. These are refractometric devices making use of extreme light concentrations and offering real-time, label-free operation. For their proper operation one needs efficient coupling between the propagating interrogation beam and bound surface plasmon polaritons (SPP), whose wave vector is much larger. An external prism in the Kretschmann or Otto configuration can be used for such coupling, or some kind of diffraction coupler or fiber-based endfire coupler. In this contribution we investigate theoretically and experimentally the possibilities to fabricate sensor structures that simultaneously exhibit plasmonic properties and ensure diffraction-based coupling. To this purpose we fabricated different micrometer-sized two-dimensional metal-dielectric arrays that can match the wave vectors of propagating beams and of SPPs and at the same time show plasmonic properties tunable by design. We investigated structures functioning in reflection or in transmission mode, with gold or aluminum as the basic material and with deep subwavelength details. Our structures can be made much more compact than the conventional ones, thus being convenient for monolithic on-chip integration with light source and detector and offering a larger degree of design freedom for multianalyte CBRN sensing.
Keywords: CBRN sensing, plasmonic sensors, diffractive optical elements, metasurfaces.

1. INTRODUCTION

Plasmonics is a field of electromagnetics dealing with evanescent waves propagating at metal-dielectric interfaces, often in the context of nanocomposite materials. Plasmonic structures ensure extreme localizations of electromagnetic fields in subwavelength volumes, thus opening a path toward many useful applications [1-3]. One of the key fields of use of plasmonics is chemical, biochemical and biological (CBB) sensing [4-6].

Plasmonic sensors offer very high sensitivities, up to the single-molecule level [7]. They are refractometric devices, mostly but not exclusively based on the propagation of surface plasmon polaritons (SPP). The presence of an analyte in minute amounts in the zone of the strongest evanescent field causes changes of the refractive index at the metal-dielectric interface of the sensor. This change results in modulation of the propagation of the evanescent wave, which is probed by an outside beam. The compactness, simplicity, all-solid design and all-optical nature of such sensors are what make them useful for CBRN agent sensing [8-10].

Other methods of coupling include the use of sharp near-field probes, e.g. SNOM microscope tips, and the application of charged particle beams [2].
In this work we consider the possibility to modify the geometry of extraordinary optical transmission arrays in order to obtain an additional degree of design freedom. To this purpose we utilize complex shapes of apertures instead of the conventional square or circular ones. These shapes are obtained as a superposition of two or more simple forms and ensure the appearance of nonlocality effects due to field localization at the deep subwavelength level [19-20]. Thus we are able to use our structures as couplers in the well-known manner of diffraction gratings and simultaneously to tailor their dispersion in a wide range.

Coupling between propagating modes (interrogation beam)


and bound evanescent modes (surface plasmon polariton or
some other kind of evanescent wave like e.g. Dyakonov
wave [11-12]) in a plasmonic element is a non-trivial task.
An evanescent wave will have large to very large wave
vector, which is the main reason behind its ability to localize
electromagnetic field into subwavelength volumes. At the
same frequency, a propagating wave will have much smaller
wave vector. In order for these two to couple, the two wave
vectors must match. This can be done by external means
[13].

We present here a theoretical, numerical and experimental


consideration of our nonlocal structures for plasmonic CBB
sensors. We analyze two batches of experimental arrays of
micrometer-sized apertures, one of them in aluminum and
the other one in gold.

The most often used structure for propagating-to-evanescent wave coupling is the prism in the Kretschmann or Kretschmann-Raether configuration [14]. This is basically a transparent prism placed on the surface of the plasmonic sensor in the attenuated total reflection configuration. Light incident under a certain angle is refracted under the critical angle at the total internal reflection (TIR) surface, in parallel with the CBB sensor surface, thus ensuring coupling of the propagating wave to the surface plasmon polariton. A variation of this method is the Otto configuration [15], where the TIR side of the prism is separated from the plasmonic surface by a lower refractive index material (i.e. there is a gap between the two).

2. THEORY
Extraordinary optical transmission arrays (EOT) represent ordered 2D arrays of apertures with subwavelength dimensions in an optically opaque metal layer [14, 21, 22]. According to the conventional theory, no light should be able to pass through them, since the apertures are too small to permit any polarization to be transmitted. In reality, near 100% transmission can occur through an EOT at certain wavelengths. Propagating beams are coupled with plasmon modes by diffraction, and the much shorter wavelength of the latter allows them to pass through the subwavelength holes if resonant conditions are satisfied.

Another method for efficient coupling between propagating


and evanescent modes is to place a diffractive structure at
the plasmonic surface. This may be a conventional
diffraction grating [2, 13], or some more complex periodic
structure. This includes the extraordinary optical
transmission apertures [16], basically two-dimensional
arrays of holes in an optically opaque metal layer. Another
geometry with similar function is the complementary
structure, an array of metal islands at the surface of a thin
dielectric film.

Pendry observed [23] that surface electromagnetic waves can be formed on EOT films even at much larger wavelengths than those at which the plasmon resonance exists, and that their spectral dispersion bears a remarkable similarity to the one obtained by the Drude model. Thus electromagnetic waves at metal surfaces with EOT holes mimic surface plasmon polaritons; such surface waves are denoted as spoof plasmons. For a metal film with a thickness h and square holes with a side a, ordered in a square photonic lattice with the side d, the following dispersion relation is obtained from coupled mode theory if the effective medium approximation is valid [24]:

Geometries of diffractive couplers can vary greatly. Different 2D metal surface corrugations can be used, including various pyramids, cubes, cylinders, etc. Stochastically roughened metal surfaces [13] also belong to diffractive couplers, since a random profile can actually be represented as a spatial superposition of a large number of ordered diffractive arrays with different unit cells. Such couplers offer a relatively low efficiency, but are operational in a wide frequency band, which makes them convenient for white-light applications. The most complex cases of diffractive couplers are 3D plasmonic crystals and 3D metamaterials [17, 18].

k = k_0 \sqrt{1 + \frac{\varepsilon_h\, a^2}{h\, d}\, \tan^2\!\left(k_0 \sqrt{\varepsilon_h}\, h\right)} ,     (1)

assuming that the EOT material is a perfect electric conductor. Here h denotes the metal film thickness, a is the side of the square aperture, d is the square lattice constant, k is the wave vector of the spoof plasmon, k0 is the wave vector in free space, εh is the permittivity of the dielectric material filling the holes, and εd is the permittivity of the ambient (for the above equation εd = 1). By properly choosing materials and geometry, one can tailor the spectral dependence of spoof plasmons to cover any desired wavelength range, regardless of the fact that surface plasmon polaritons cannot exist in it.

Both prism and grating coupling belong to phase-matching


techniques. Wave vectors can be also coupled by spatial
mode matching, by the end-fire configuration [2], where the
propagating wave is guided to coincide with the plane of the
metal-dielectric plasmonic interface. A disadvantage of this
configuration is its rather low coupling efficiency when
standard fiber optics is used to guide the propagating beam
to the plasmonic structure.


In reality metals must be lossy, and their skin depth is

d_s = \frac{1}{k_0 \sqrt{-\,\mathrm{Re}\,\varepsilon_m}} ,     (2)

where ε_m is the dielectric permittivity of the real metal.


In this case a generalized form of the dispersion relation for spoof plasmons is valid [25]:

k^2 = \varepsilon_d k_0^2 + \frac{\varepsilon_d^2 k_0^2\, \varepsilon_h\, a\, [a + d_s(1+i)]}{h\, d}\, \tan^2\!\left(k_0 \sqrt{\varepsilon_h}\, \frac{h\, [a + d_s(1+i)]}{a}\right)     (3)
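Equation (2) is straightforward to evaluate; the sketch below computes the skin depth for an assumed, order-of-magnitude value of the real part of the gold permittivity in the thermal infrared, which is the quantity that then enters Eq. (3) through the correction a → a + d_s(1 + i). The permittivity figure is an assumption for illustration, not a value from this work.

```python
import numpy as np

def skin_depth(wavelength, re_eps_metal):
    """Skin depth d_s = 1 / (k0 * sqrt(|Re eps_m|)), cf. Eq. (2); lengths in metres."""
    k0 = 2.0 * np.pi / wavelength
    return 1.0 / (k0 * np.sqrt(abs(re_eps_metal)))

wavelength = 10.6e-6        # assumed thermal-IR interrogation wavelength (m)
re_eps_gold = -5.0e3        # rough order-of-magnitude Re(eps) of gold in this range (assumption)
d_s = skin_depth(wavelength, re_eps_gold)
print(f"d_s ~ {d_s * 1e9:.0f} nm")   # a few tens of nm; enters Eq. (3) via a -> a + d_s(1 + i)
```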

Coupling of propagating modes to evanescent ones by an EOT is defined by the lattice constant d. For an arbitrary incident angle θ_i and in the direction of a lattice side, the coupling angle is determined according to the well-known expression

\theta = \arcsin\!\left(\frac{m\lambda}{d} - \sin\theta_i\right) ,     (4)

According to the effective medium approximation (EMA) (e.g. [26]) used to derive (1) and (3), the exact shape of the EOT aperture is unimportant as long as the effective amount of metal and dielectric remains constant. The reason is the subwavelength dimensions of the aperture, i.e. the fact that the incident light is too "short-sighted" to discern any details. Along one of the in-plane directions one can utilize the simple mixing formula

\varepsilon_{\mathrm{eff}} = \frac{\varepsilon_m (d - a) + \varepsilon_h\, a}{d} ,     (5)

Picture 1. Deep subwavelength modifications of a square EOT array in a square lattice: a) basic EOT; b) array with edge patches; c) array with shifted, overlapping squares; d) array with shifted, non-overlapping squares. Dashed lines denote the unit cell.

where m is the diffractive order and λ is the wavelength of the incident propagating beam.
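As a concrete illustration of Eq. (4), the short sketch below computes the first-order coupling angle for a 24 µm lattice similar to the aluminum arrays described in the experimental section; the interrogation wavelength and the first-order grating form of Eq. (4) used here are assumptions for illustration only.

```python
import numpy as np

def coupling_angle_deg(wavelength, d, theta_i_deg=0.0, m=1):
    """Diffractive coupling angle from Eq. (4): theta = arcsin(m*lambda/d - sin(theta_i)).
    Returns None when the order is evanescent (|argument| > 1)."""
    x = m * wavelength / d - np.sin(np.radians(theta_i_deg))
    if abs(x) > 1.0:
        return None
    return float(np.degrees(np.arcsin(x)))

# Illustrative values: 24 um lattice (cf. the aluminum arrays below), assumed 10.6 um beam
print(coupling_angle_deg(wavelength=10.6e-6, d=24e-6))          # ~26 degrees at normal incidence
print(coupling_angle_deg(wavelength=10.6e-6, d=24e-6, m=3))     # third order is evanescent -> None
```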


Picture 1 shows some simple cases of aperture shape modifications. In Pic. 1a a simple EOT is shown, with square apertures in a square array. For simplicity, only the four nearest apertures are shown. The unit cell used for the calculation of spectral properties is shown by dashed lines. Pic. 1b shows a composite shape where a smaller square patch is added to each edge of the basic aperture. Pic. 1c shows the case when the composite shape is obtained by simple overlapping of two identical squares, and in Pic. 1d one can see a composite unit cell consisting of two identical squares at a small distance from each other.
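Composite apertures of this kind are convenient to prototype as simple binary masks before committing to a full electromagnetic simulation; the sketch below builds the four variants of Picture 1 on a discretized unit cell. All pixel dimensions are arbitrary illustrative assumptions, not the fabricated sizes.

```python
import numpy as np

def square(shape, center, side):
    """Boolean mask with a filled square of the given side (in pixels)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = center
    return (np.abs(yy - cy) <= side // 2) & (np.abs(xx - cx) <= side // 2)

N = 500                                   # pixels per unit cell (resolution is an assumption)
hole, patch, shift, gap = 160, 80, 60, 40 # aperture side, edge-patch side, shift, pair gap (px)
c = (N // 2, N // 2)

basic = square((N, N), c, hole)                                            # Pic. 1a
patched = basic.copy()                                                     # Pic. 1b: edge patches
for dy, dx in ((-hole // 2, 0), (hole // 2, 0), (0, -hole // 2), (0, hole // 2)):
    patched |= square((N, N), (c[0] + dy, c[1] + dx), patch)
overlapping = basic | square((N, N), (c[0] + shift, c[1] + shift), hole)   # Pic. 1c
half = (hole + gap) // 2
pair = (square((N, N), (c[0], c[1] - half), hole)
        | square((N, N), (c[0], c[1] + half), hole))                       # Pic. 1d

for name, m in (("basic", basic), ("edge patches", patched),
                ("overlapping", overlapping), ("separated pair", pair)):
    print(f"{name:15s} open-area fraction: {m.mean():.3f}")
```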


More complex EMA expressions like the Maxwell-Garnett or Bruggeman equations [27] essentially lead to the same conclusion.

3. NUMERICAL
We used the finite element method (FEM) to perform our electromagnetic simulations of the plasmonic properties of both conventional and generalized EOT structures. We solved Maxwell's equations with the boundary conditions defined for our 2D aperture array. We utilized the RF module of the Comsol Multiphysics software package.

In this work we consider a situation where additional deep subwavelength modifications are introduced to the aperture shape (composite apertures or super-unit cells) [19, 20, 28]. According to the EMA, this should not cause any noticeable changes in the spectral dispersion of the structure. In reality, however, this is not the case [19]. On the contrary, the introduced details may cause very high local field concentrations due to, for instance, the proximity effect (two shapes at very close distances enhance the field in the gap between them) or the sharp tip effect (the field concentration is higher if pointed parts are sharper). If this is so, the approximation of locality is no longer valid and the EMA breaks down. In this way one can tune the spectral behavior without changing the diffractive properties necessary for wave coupling.

As an illustration, Picture 2 presents the results calculated for a conventional EOT with square holes in a square lattice in a 100 nm thick gold layer (dotted line) and for a superstructure with deep subwavelength modifications. Each lattice had a unit cell side of 5 µm and a hole side of 2.8 µm. The superstructure was obtained by placing smaller squares in the corners of each square hole (corner patches), their side being 1.4 µm, the edge of the larger square coincident with the center of the smaller one, as shown in Picture 1b. This is a simple case used for illustrative purposes. The real subwavelength geometry may include any combination of primitive patterns, as long as the composite apertures remain arranged in the same 2D lattice.

Since one can shape the apertures in any form, in principle it is possible to tailor the spectral dispersion to suit a desired application. In the case of CBB sensors, this means that one could be able to design a spectral dispersion to coincide with the spectrum of a targeted analyte and thus fabricate a target-specific device without any functionalization and receptor layers. Thus one can obtain a simpler and more compact device with improved selectivity and at the same time with an inherent ability to couple with the interrogating beam without any external means.


The unit cell was defined by locating its center exactly in


the middle between four surrounding holes. In this way each
corner of the unit cell contains a quarter of a hole.
Simulations were done for a normal incidence of the
interrogating beam.
It can be seen in Picture 2 that the EOT with composite
apertures shows a much richer spectral behavior, with two
additional peaks that do not appear in the simple EOT, their
positions being dependent on the geometry of deep
subwavelength modifications. Our calculations for different
geometries (not shown here because of the paper length
restrictions) show richer spectra, sometimes even vastly so.

Picture 2. Reflection coefficient of a simple EOT array (dotted) and of the superstructure (solid), as a function of wavelength (10.4-11.0 µm).

Picture 3. Apertures in optically opaque gold film on SiO2 substrate; 2 µm diameter, 5 µm side of the square unit cell.

4. EXPERIMENTAL
Picture 4. Subwavelength aperture array in gold film with
a gradient change in hole diameter

We used single-side polished n-type single-crystalline silicon substrates, 375 µm thick, <100> orientation, 2-5 Ωcm resistivity. In our first batch of samples a 500 nm silica layer was formed on the polished surface by thermal oxidation. A 30 nm chromium layer was then sputtered (binding buffer) and a 100 nm gold layer was sputtered over the Cr buffer. A 500 nm thick positive photoresist AZ1505 was spin-coated on top of the gold layer and the patterns were laser-drawn using a LaserWriter LW 405, spot size 2 µm. After resist removal, the gold and chromium layers were removed by isotropic wet etching.
The second batch was fabricated using the same Si wafers with a 600 nm thick thermally oxidized silica layer. A 500 nm thick aluminum layer was sputtered over the thermal oxide and an AZ1505 resist layer 500 nm thick was spin-coated over the aluminum. Patterns were again defined using the LaserWriter, and the Al layer was removed by isotropic wet etching.
The surface morphology of our experimental samples was characterized by atomic force microscopy in contactless mode (Veeco Autoprobe CP-Research atomic force microscope) and by a dark-field metallurgical microscope.

Picture 5. Subwavelength hole pairs ensuring the use of nonlocality through deep subwavelength modification of electromagnetic modes.



Picture 3 shows the simplest structure in our experiments. It is an array of circular apertures in a 100 nm thick gold film, and the substrate is silica on silicon, as described at the beginning of this section. The diameter of the subwavelength apertures is 2 µm and the side of the square unit cell is 5 µm. Dark fields at the bottom of each hole correspond to silica, while the lighter areas are gold.

Square apertures with 8 µm sides were fabricated in aluminum films 500 nm thick. The side of the square unit cell was 24 µm. Picture 6 shows microphotographs of two experimental patterns observed on a dark-field metallurgical microscope. In one pattern (top picture) the square apertures overlap, being shifted along both axes of the square unit cell by 4 µm. The second pattern, shown at the bottom, consists of two neighboring squares, where the distance between the two closest edges in the vertical and the horizontal direction is 2 µm each.

Picture 4 shows an experimental sample of an array of circular holes in a gold film. Laser exposure was done with variable intensity from left to right, so that the diameter of the holes gradually changes. Thus a gradient structure is obtained, with spatially varying properties. This kind of structure is of interest for coupling between propagating and evanescent modes, where the gradients ensure an additional degree of freedom in transforming the optical space [29, 30].

5. CONCLUSION
We developed and fabricated planar micrometer-sized structures that are intended to simultaneously serve as a platform for chemical or biological sensors and as a coupler between propagating and evanescent modes. In this way a general and tailorable tool was developed for the further fabrication of SPP CBB sensors. Electromagnetic characterization of the fabricated structures does not belong to the scope of this work, and the results related to it will be published elsewhere.
Our couplers utilize Pendry's concept of spoof plasmons in two-dimensional arrays of subwavelength apertures in thin metal films. To further tailor the spectral dispersion of the obtained structures, we utilized fine tuning of the position of the apertures and the ensuing proximity effects, as well as the field enhancement at sharp tips. The nonlocality effects thus obtained ensure the possibility to produce structures with desired transmission/reflection spectra that can in principle be made to coincide with the spectrum of a particular analyte. In this way one can obtain CBB sensors that do not necessarily need surface functionalization by receptors. At the same time we are still able to use our structures as couplers in the well-known manner of the EOT arrays, since their diffractive properties are defined by the lattice layout and not by the shape of their apertures. A kind of hybrid structure is thus obtained, retaining the diffraction properties of an EOT but having customizable dispersion. The spectral range is not limited to the UV-visible range as in the conventional surface plasmon polariton sensors, and such structures can actually be used at longer wavelengths, even reaching the THz range. In this way a path is opened towards more compact sensors that are rugged and more convenient for field use, and at the same time offer an inherently increased selectivity.

Picture 6. 2D arrays of composite unit cells made of square apertures with identical dimensions in aluminum film (scale bars: 50 µm). Top: partially overlapping squares; bottom: pairs of apertures with a small gap.

The case of aperture pairs with a nanometer-sized gap is shown in Picture 5. As determined by AFM measurement, the gap between the apertures in the top part of the picture is about 300 nm at the narrowest point. In this way a submicrometer bridge composed of gold is formed in the middle of the hole pair. In our other experiments, not presented here, we also fabricated complementary structures, where gold islands 2 µm in diameter were ordered in a 2D square array over the same substrate. In such structures, proximity effects lead to near-field enhancement in the gap between two neighboring islands.

References
[1] Ozbay, E., Plasmonics: Merging Photonics and Electronics at Nanoscale Dimensions, Science, 311 (2006), 189-193.
[2] Maier, S.A., Plasmonics: Fundamentals and Applications, Springer Science+Business Media, New York, 2007.
[3] Barnes, W.L., Dereux, A., Ebbesen, T.W., Surface plasmon subwavelength optics, Nature, 424(6950) (2003), 824-830.
[4] Anker, J.N., Hall, W.P., Lyandres, O., Shah, N.C., Zhao, J., Van Duyne, R.P., Biosensing with plasmonic nanosensors, Nature Mater., 7(6) (2008), 442-453.
[5] Abdulhalim, I., Zourob, M., Lakhtakia, A., Surface plasmon resonance for biosensing: A mini-review, Electromagnetics, 28(3) (2008), 214-242.
[6] Jakšić, Z., Jakšić, O., Djurić, Z., Kment, C., A consideration of the use of metamaterials for sensing applications: Field fluctuations and ultimate performance, J. Opt. A: Pure Appl. Opt., 9(9) (2007), S377-S384.
[7] Zijlstra, P., Paulo, P.M.R., Orrit, M., Optical detection of single non-absorbing molecules using the surface plasmon resonance of a gold nanorod, Nature Nanotech., 7(6) (2012), 379-382.
[8] Vaseashta, A., Braman, E., Susmann, P., Technological Innovations in Sensing and Detection of Chemical, Biological, Radiological, Nuclear Threats and Ecological Terrorism, Springer, Dordrecht, 2012.
[9] Woodfin, R.L., Trace chemical sensing of explosives, Wiley, Hoboken, 2007.
[10] Marshall, M., Oxley, J.C., Aspects of Explosives Detection, Elsevier, Amsterdam, 2009.
[11] Dyakonov, M.I., New type of electromagnetic wave propagating at an interface, Sov. Phys. JETP, 67 (1988), 714-716.
[12] Vuković, S.M., Miret, J.J., Zapata-Rodríguez, C.J., Jakšić, Z., Oblique surface waves at an interface between a metal-dielectric superlattice and an isotropic dielectric, Physica Scripta, (T149) (2012), 014041.
[13] Raether, H., Surface plasmons on smooth and rough surfaces and on gratings, Springer Verlag, Berlin-Heidelberg, Germany, 1986.
[14] Kretschmann, E., Die Bestimmung optischer Konstanten von Metallen durch Anregung von Oberflächenplasmaschwingungen, Z. Phys. A Hadr. Nucl., 241(4) (1971), 313-324.
[15] Otto, A., Excitation of nonradiative surface plasma waves in silver by the method of frustrated total reflection, Z. Phys., 216(4) (1968), 398-410.
[16] Ebbesen, T.W., Lezec, H.J., Ghaemi, H.F., Thio, T., Wolff, P.A., Extraordinary optical transmission through sub-wavelength hole arrays, Nature, 391 (1998), 667-669.
[17] Vuković, S.M., Jakšić, Z., Shadrivov, I.V., Kivshar, Y.S., Plasmonic crystal waveguides, Appl. Phys. A, 103(3) (2011), 615-617.
[18] Cai, W., Shalaev, V., Optical Metamaterials: Fundamentals and Applications, Springer, Dordrecht, Germany, 2009.
[19] Jiang, Z.H., Yun, S., Lin, L., Bossard, J.A., Werner, D.H., Mayer, T.S., Tailoring dispersion for broadband low-loss optical metamaterials using deep-subwavelength inclusions, Scientific Reports, 3 (2013).
[20] Tanasković, D., Jakšić, Z., Obradov, M., Jakšić, O., Super Unit Cells in Aperture-Based Metamaterials, Journal of Nanomaterials, 2015 (2015), 312064.
[21] Brolo, A.G., Gordon, R., Leathem, B., Kavanagh, K.L., Surface Plasmon Sensor Based on the Enhanced Light Transmission through Arrays of Nanoholes in Gold Films, Langmuir, 20 (2004), 4813-4815.
[22] Brolo, G., Plasmonics for Future Biosensors, Nature Photonics, 6 (2012), 709-713.
[23] Pendry, J.B., Martín-Moreno, L., Garcia-Vidal, F.J., Mimicking surface plasmons with structured surfaces, Science, 305(5685) (2004), 847-848.
[24] Ng, B., Terahertz sensing with spoof plasmon surfaces, Ph.D. Dissertation, Imperial College, London, England, 2014.
[25] Rusina, A., Durach, M., Stockman, M.I., Theory of spoof plasmons in real metals, Appl. Phys. A, 100(2) (2010), 375-378.
[26] Koschny, T., Kafesaki, M., Economou, E.N., Soukoulis, C.M., Effective medium theory of left-handed materials, Phys. Rev. Lett., 93(10) (2004), 107402-1-107402-4.
[27] Lakhtakia, A., Michel, B., Weiglhofer, W.S., The role of anisotropy in the Maxwell Garnett and Bruggeman formalisms for uniaxial particulate composite media, J. Phys. D, 30(2) (1997), 230-240.
[28] Tanasković, D., Obradov, M., Jakšić, O., Jakšić, Z., Nonlocal effects in double fishnet metasurfaces nanostructured at deep subwavelength level as a path towards simultaneous sensing of multiple chemical analytes, Photonics and Nanostructures - Fundamentals and Applications, 18 (2016), 36-42.
[29] Leonhardt, U., Philbin, T.G., Transformation Optics and the Geometry of Light, in: Progress in Optics, Wolf, E., Ed., Elsevier Science & Technology, Amsterdam, The Netherlands, 2009, Vol. 53, pp. 69-152.
[30] Dalarsson, M., Norgren, M., Asenov, T., Donov, N., Jakšić, Z., Exact analytical solution for fields in gradient index metamaterials with different loss factors in negative and positive refractive index segments, J. Nanophotonics, 7(1) (2013).


COMPLETE KINETIC PROFILING OF THE THREE NANOMOLAR


ACETYLCHOLINESTERASE INHIBITORS
MAJA VITOROVIĆ-TODOROVIĆ
Military Technical Institute, Belgrade, mvitod@chem.bg.ac.rs
MIRJANA JAKIŠIĆ
Military Technical Institute, Belgrade, jakisicmirjana@gmail.com
SONJA BAUK
Military Technical Institute, Belgrade, bauk.sonja@gmail.com
BRANKO DRAKULIĆ
Center for Chemistry, IChTM, Belgrade, bdarkuli@chem.bg.ac.rs

Abstract: Acetylcholinesterase (EC 3.1.1.7) is an enzyme which terminates cholinergic neurotransmission by hydrolyzing acetylcholine at nerve and nerve-muscle junctions. It is a well-known target for the treatment of Alzheimer's disease and for the pretreatment of nerve agent intoxications. Recently, we developed a 3D-QSAR model using alignment-independent descriptors based on molecular interaction fields for the estimation of the reversible inhibition potency (IC50) of possible dual binding site ligands. Based on this model, three reversible ligands were designed and synthesized, comprising a tacrine unit and an aroylacrylic acid amide scaffold, mutually linked by a linker of eight methylene units. Their inhibition potency was determined by the Ellman assay. Along with this, we estimated the inhibition type and the corresponding inhibition constants (Ki1 and Ki2) for the three compounds. Putative noncovalent interactions of the ligands with amino acid residues in the acetylcholinesterase active site gorge were estimated by docking calculations.
Keywords: acetylcholinesterase, reversible inhibition, dual-binding site ligands, inhibition type, inhibition constants,
docking calculations.

1. INTRODUCTION


Acetylcholinesterase (EC 3.1.1.7, AChE) is a carboxylesterase which terminates cholinergic neurotransmission by hydrolyzing the neurotransmitter acetylcholine (ACh) in the synaptic cleft of nerve and nerve-muscle junctions. Among other applications, reversible AChE inhibition is applied in the pretreatment against nerve agent intoxications [1,2].
AChE contains a 20 Å deep and narrow gorge, in which five regions involved in substrate, irreversible and reversible inhibitor binding can be distinguished (human AChE numbering): (1) the catalytic triad residues Ser 203, His 447 and Glu 334 [3], which are found at the bottom of the gorge and directly participate in the catalytic cycle by a charge relay mechanism, as in other serine esterases; (2) the oxyanion hole, an arrangement of hydrogen bond donors which stabilize the transient tetrahedral enzyme-substrate complex by accommodating the negatively charged carbonyl oxygen; this region inside the active center is formed by the backbone -NH- groups of the amino acid residues Gly 121, Gly 122 and Ala 204 [4,5]; (3) the anionic site (AS), where Trp 86 is situated; this residue is conserved in all cholinesterases and is involved in the orientation and stabilization of the trimethylammonium group of ACh by forming cation-π interactions [6-9]; (4) the acyl pocket, comprising two phenylalanine residues, 295 and 297, which interact with the substrate acyl group; they form clamps around the methyl group of ACh and decrease its degrees of freedom [10]; (5) the peripheral anionic site (PAS) [11-13], comprising residues located at the rim of the active site gorge: Tyr 72, Tyr 124, Trp 286 and Asp 74. Possible binding sites for the reversible inhibitors comprise both the AS and the PAS. The so-called dimeric (dual) inhibitors bind simultaneously to both of these sites.

Recently we performed a 3D-QSAR analysis based on alignment-independent descriptors for a set of 110 structurally diverse, dual binding AChE inhibitors [14]. Based on the literature data, as well as the information obtained by detailed analysis of the most important variables deduced from the models, we designed in silico a set of dual compounds based on tacrine and aroylacrylic acid amide fragments, mutually linked by eight methylene units. The previously obtained 3D-QSAR models estimated low nanomolar inhibitory activity of the compounds toward AChE. In this communication, we synthesized these dual inhibitors and estimated their inhibitory activity toward AChE and their kinetic behavior.



2. MATERIALS AND METHODS


2.1. Chemistry
All chemicals were purchased from Sigma-Aldrich or Merck and were used as received. The 1H and 13C NMR spectra were recorded in CDCl3 on Varian Gemini 200/50 MHz or Bruker AVANCE 500/125 MHz instruments. Chemical shifts are reported in parts per million (ppm) relative to tetramethylsilane (TMS) as internal standard. Spin multiplicities are given as follows: s (singlet), d (doublet), t (triplet), m (multiplet), or br (broad). The HR-ESI-MS spectra were recorded on an Agilent Technologies 6210-1210 TOF-LC-ESI-MS instrument in positive mode. Samples were dissolved in MeOH. The detailed procedures for the synthesis of aroylacrylic acid amides are given elsewhere [15, 16].

General synthetic procedure for the target compounds 8-10: To a mixture of the correspondingly substituted (E)-4-aryl-4-oxo-2-butenoic acid amide (7 mmol) in chloroform (15 mL), an equimolar amount of 4 and 15 mL of toluene were added, and the resulting mixture was stirred at room temperature for 24 h. The solvent was removed under reduced pressure, and the obtained semi-solid substance was purified by silica gel column chromatography (CHCl3 : MeOH : NH4OH = 7 : 3 : 0.07).

4-N-Diphenyl-4-oxo-2-[8-(1,2,3,4-tetrahydroacridin-9-ylamino)octylamino]butanamide (8): C37H44N4O2, reaction of (E)-4-oxo-4-phenyl-2-butenoic acid phenylamide (0.70 mmol) and an equimolar amount of 4 gave 8 in quantitative yield as an orange semi-solid. 1H NMR (200 MHz, CDCl3) δ: 1.01-1.20 (overlapped m, 8H,
linker-CH2-); 1.50 (br, 2H, linker-CH2-); 1.72 (br, 2H,
linker-CH2-); 2.51 (br, 4H, cyclohexyl-CH2-); 2.90 (br,
4H, cyclohexyl-CH2-); 3.33-3.40 (overlapped m, 7H,
linker-CH2- and ABX); 3.56-3.61 (m, 1H, ABX); 4.34 (m,
1H, ABX); 4.34 (m, 1H, ABX); 4.56 (s, amino-NH-);
4.89 (s, amino-NH-); 6.89-7.56 (overlapped m, tacrine-m-phenyl, amido-p-phenyl, amido-m-phenyl, aroyl-p-phenyl and aroyl-m-phenyl); 7.79-7.86 (overlapped m, 4H, aroyl-o-phenyl, amido-o-phenyl and tacrine-o-phenyl); 9.54 (s, 1H, amido-NH). 13C NMR (50 MHz, CDCl3) δ: 21.18;
22.23; 22.60; 24.35; 26.55; 26.84; 20.97; 29.90; 30.55;
31.37; 32.96; 41.57; 41.97; 43.05; 59.10; 61.38; 64.33;
67.15; 114.93; 119.13; 119.60; 122.03; 123.43; 124.34;
125.05; 126.67; 127.91; 128.78; 130.17; 135.99; 137.05;
137.56; 141.38; 142.14; 141.28; 151.10; 157.38; 170.15;
171.80; 196.49; 198.37. ESI-MS HR: 577.3529 (M +1),
Calc. 577.3537; 289.1807 (M+2), Calc. 289.1805.

Synthetic procedure for 9-chloro-1,2,3,4-tetrahydroacridine (3): To a mixture of anthranilic acid 1 (2.1 g, 15 mmol) and an equimolar amount of cyclohexanone 2, 16 mL of POCl3 was carefully added in an ice bath. The mixture was heated under reflux for 2 h, then cooled to room temperature and concentrated under reduced pressure to give a slurry. The residue was diluted with EtOAc, neutralized with ice-cold aqueous K2CO3, and washed with brine. The organic layer was dried over anhydrous K2CO3 and concentrated in vacuum to furnish the crude product, which was subsequently re-crystallized from acetone to give the final compound. 1H NMR (200 MHz, CDCl3) δ: 1.83-1.96 (m, 4H, cyclohexyl-CH2-); 2.93 (t, 2H, J=5.63 Hz, cyclohexyl-CH2-); 3.08 (t, 2H, J=6.35 Hz, cyclohexyl-CH2-); 7.47 (t, 1H, J=7.06 Hz, m-phenyl); 7.62 (t, 1H, J=6.77 Hz, m-phenyl); 7.95 (t, 1H, J=8.19 Hz, o-phenyl); 8.09 (t, 1H, J=7.06 Hz, m-phenyl). 13C NMR (50 MHz, CDCl3) δ: 12.42; 22.45; 27.28; 33.96; 123.46; 125.16; 126.17; 128.49; 129.05; 146.50; 159.26.

4-(4-Isopropylphenyl)-4-oxo-N-phenyl-2-[8-(1,2,3,4-tetrahydroacridine-9-ylamino)octylamino]butanamide (9): C40H50N4O2, reaction of (E)-4-(4-isopropylphenyl)-4-oxo-2-butenoic acid phenylamide (0.70 mmol) and an equimolar amount of 4 gave 9 in quantitative yield as an orange semi-solid. 1H NMR (500 MHz, CDCl3) δ: 1.23-1.25 (overlapped m, 8H, d, 2H, J=7.09 Hz, i-Pr CH3); 1.31-1.37 (overlapped m, 8H, linker
-CH2-); 1.57 (m, 2H, linker-CH2-); 1.64 (m, 2H, linkerCH2-); 1.90 (m, 4H, cyclohexyl-CH2-); 2.69 (m, 4H,
cyclohexyl-CH2-); 2.95 (h, 1H, J1,2=6.59 Hz, J1,3=13.78
Hz, i-PrCH); 3.05 (br, 2H, linker-CH2-); 3.28 (dd, 1H,
J1,2=8.19 Hz, J1,3=17.38 Hz, ABX); 3.47 (t, 2H, J=7.39
Hz, amino-NH); 3.62 (dd, 1H, J1,2=3.20 Hz, J1,3=17.39
Hz, ABX); 3.70 (dd, 1H, J1,2=3.64 Hz, J1,3=8.71 Hz,
ABX); 7.09 (t, 1H, J=7.47 Hz, amido-p-phenyl); 7.297.34 (overlapped m, 6H, amido-m-phenyl, amido-ophenyl and tacrine-m-phenyl); 7.58 (d, 2H, J=7,64 Hz,
aroyl-m-phenyl); 7.91 (d, 2H, J=8.28 Hz, tacrine-o-phenyl); 7.95 (d, 2H, J=8.28 Hz, aroyl-o-phenyl); 9.60 (s, 1H, amido-NH). 13C NMR (125 MHz, CDCl3) δ: 22.70;
22.96; 23.55; 24.71; 26.82; 27.08; 29.26; 30.15; 31.69;
33.86; 34.21; 40.04; 48.37; 49.44; 59.45; 115.72; 119.24;
120.12; 122.82; 123.52; 124.04; 126.77; 128.27; 128.41;
128.52; 128.94; 134.12; 137.73; 147.31; 150.80; 155.21;
158.30; 171.90; 196.18. ESI-MS HR: 310.2046 (M +2),
Calc. 310.2045.

Synthetic procedure for N-(1,2,3,4-tetrahydroacridin-9-yl)-octane-1,8-diamine (4): The mixture of 3 (1 g, 4.61 mmol) and 1,8-diaminooctane (1.99 g, 13.8 mmol) in 1-pentanol (5 mL) was refluxed for 18 h at 160 °C. The
EtOAc (50 mL). The solution was washed with 10%
NaOH aqueous solution, twice with distilled water, and
dried with anhydrous MgSO4. After the filtration of the
dried solution, the solvent was removed under reduced
pressure, and obtained semi-solid substance was purified
by silica gel column chromatography (CHCl3 : MeOH :
NH4OH = 7 : 3 : 0.07). 1H NMR (200 MHz, CDCl3) δ:
1.13-1.19 (overlapped m, linker-CH2-); 1.28 (br, 2H,
linker-CH2-); 1.47 (br, 2H, linker-CH2-); 1.74 (br, 2H,
cyclohexyl-CH2-); 2.52 (br, 2H, cyclohexyl-CH2-); 2.92
(br, 2H, linker-CH2-); 3.03 (br, 2H, amino-NH2-); 3.30
(br, 2H, linker-CH2-); 3.84 (br, 1H, amino-NH-); 7.18 (t,
1H, J=7.14 Hz, m-phenyl); 7.38 (t, 1H, J=7.52 Hz, m-phenyl); 7.78-7.82 (overlapped m, 2H, o-phenyl). 13C NMR (50 MHz, CDCl3) δ: 22.31; 22.58; 24.31; 26.36; 28.80;
31.22; 32.66; 33.52; 41.39; 48.91; 115.26; 119.75;
122.42; 122.98; 127.67; 128.15; 146.99; 150.27; 157.87.

4-(3,4-Dimethylphenyl)-4-oxo-N-phenyl-2-[8-(1,2,3,4-tetrahydroacridin-9-ylamino)octylamino]butanamide (10): C39H48N4O2, reaction of (E)-4-(3,4-dimethylphenyl)-4-oxo-2-butenoic acid phenylamide (0.70 mmol) and
equimolar amount of 4, gave 10, in quantitative yield as


orange semi-solid. 1H NMR (500 MHz, CDCl3) δ: 1.25-1.37 (overlapped m, 8H, linker-CH2-); 1.50 (m, 2H,
linker-CH2-); 1.64 (m, 2H, linker-CH2-); 1,90 (m, 4H,
cyclohexyl-CH2-); 2.29 (s, 3H, -CH3); 2.30 (s, 3H, -CH3);
2.69 (m, 4H, cyclohexyl-CH2-); 3.06 (m, 4H, linker-CH2); 3.26 (dd, 1H, J1,2=8.70 Hz, J1,3=17.40 Hz, ABX); 3.47
(t, 2H, J=7.15 Hz, amino-NH-); 3.62 (dd, 1H, J1,2=4.35
Hz, J1,3=17.39 Hz, ABX); 3.68 (dd, 1H, J1,2=4.35 Hz,
J1,3=8.70 Hz, ABX); 7.09 (t, 1H, J= 7.14 Hz, amido-pphenyl); 7.13-7.25 (overlapped m, 3H, amid-m-phenyl,
aroyl-m-phenyl); 7.32 (m, 2H, amid-o-phenyl); 7.53 (t,
1H, J=6.88 Hz, tacrine-m-phenyl); 7.59 (d, 1H, J=8.35
Hz, aroyl-o-phenyl); 7.70 (t, 1H, J=7.37 Hz, tacrine-mphenyl); 7.74 (s, 1H, aroyl-o-phenyl); 7.91 (d, 1H, J=8.35
Hz, tacrine-o-phenyl); 7.95 (d, 1H, J=7.86 Hz, tacrine-o-phenyl); 9.59 (s, 1H, amido-NH). 13C NMR (125 MHz, CDCl3) δ: 19.70; 20.00; 21.39; 22.70; 22.99; 24.71;
26.77; 27.09; 29.27; 30.16; 31.69; 33.87; 40.12; 48.38;
49.44; 59.50; 115.70; 119.23; 120.11; 122.79; 123.54;
124.03; 125.23; 125.88; 128.57; 128.94; 129.25; 129.91;
134.17; 137.05; 137.76; 143.25; 147.27; 150.82; 158.25;
171.95; 198.39. ESI-MS HR: 605.3830 (M +1), Calc.
605.3856; 303.1967 (M+2), Calc. 303.1967.


Lineweaver-Burk plots were generated by using a fixed amount of acetylcholinesterase and varying amounts of the substrate (0.097-0.582 mM), in the absence or presence of different inhibitor concentrations. The replots of the slopes and intercepts of the double-reciprocal plots against the inhibitor concentrations gave the inhibition constants (Ki1 and Ki2, for the binding to the free enzyme and to the enzyme-substrate complex, respectively) as the intercepts on the x-axis.
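The replot procedure described above can be scripted directly; the sketch below fits straight lines to 1/v versus 1/[S] data at each inhibitor concentration, replots the slopes and intercepts against [I], and reads Ki1 and Ki2 from the x-intercepts. All numerical values are synthetic placeholders generated from a generic mixed-inhibition rate law, not the measured data.

```python
import numpy as np

def ki_from_lineweaver_burk(S, rates_by_I):
    """Estimate Ki1 and Ki2 from replots of Lineweaver-Burk slopes and intercepts vs [I].
    S: substrate concentrations; rates_by_I: {inhibitor conc: velocities measured at S}."""
    inv_S = 1.0 / np.asarray(S, dtype=float)
    I_list, slopes, intercepts = [], [], []
    for I, v in sorted(rates_by_I.items()):
        slope, intercept = np.polyfit(inv_S, 1.0 / np.asarray(v, dtype=float), 1)
        I_list.append(I); slopes.append(slope); intercepts.append(intercept)
    m1, b1 = np.polyfit(I_list, slopes, 1)       # slope replot: x-intercept = -Ki1
    m2, b2 = np.polyfit(I_list, intercepts, 1)   # intercept replot: x-intercept = -Ki2
    return b1 / m1, b2 / m2                      # Ki = -(x-intercept) = b / m

# Synthetic placeholder data from a mixed-inhibition rate law (not the paper's data):
S = [0.097, 0.145, 0.291, 0.582]                 # mM, as in the assay description
Vmax, Km, Ki1_true, Ki2_true = 1.0, 0.2, 2.0, 4.0
rates = {I: [Vmax * s / (Km * (1 + I / Ki1_true) + s * (1 + I / Ki2_true)) for s in S]
         for I in (0.0, 2.0, 4.0, 8.0)}
print(ki_from_lineweaver_burk(S, rates))         # ~ (2.0, 4.0)
```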

2.3. Docking calculations


The docking experiments were performed on the Mus musculus AChE structure refined at 2.05 Å resolution (PDB entry: 2HA2). Truncated residues were properly completed by means of SwissPDBViewer [17]. Hydrogen atoms were added to the protein amino acid residues, nonpolar hydrogens were merged, and Gasteiger charges were loaded using AUTODOCK Tools [18]. The lowest energy conformations of 8 were generated using the Omega software [19]. Compound 8 was docked to mAChE using the AUTODOCK 4.0.1 package [20]. Docking was carried out by using the hybrid Lamarckian genetic algorithm; 5 runs were performed with an initial population of 250 randomly placed individuals and a maximum number of 1.0 × 10^7 energy evaluations. All other parameters were maintained at their default settings. The lowest energy cluster returned by AUTODOCK was used for further analysis.

2.2. Biological studies

3. RESULTS AND DISCUSSION

The inhibition potency of compounds 8-10 toward AChE was evaluated by the Ellman procedure [28], using the enzyme from electric eel (Electrophorus electricus), type VI-S (Sigma), and acetylthiocholine iodide (0.28 mM) as substrate. A broad range of concentrations of each compound, producing 0-100% inhibition of enzyme activity, was used. The reaction took place in a final volume of 2 mL of 0.1 M potassium phosphate buffer, pH 8.0, containing 0.03 units of AChE and 0.3 mM 5,5-dithiobis(2-nitrobenzoic) acid (DTNB), used to produce the yellow anion of 5-thio-2-nitrobenzoic acid in reaction with the thiocholine released by AChE. The test compound was added to the enzyme solution and preincubated at 25 °C for 15 min, followed by the addition of DTNB (0.1 mL) and substrate (0.05 mL). Inhibition curves were recorded at least in triplicate. One triplicate sample without the test compound was always present to yield 100% of AChE activity. The reaction was monitored for 0.5 min (the absorbance was measured every 10 s) and the color production was measured at 412 nm. The reaction rates were compared, and the percent of inhibition due to the presence of the test compounds was calculated. IC50 values were determined by fitting the data to dose-response curves (inhibitor concentration vs. velocity of the enzyme reaction).
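A Hill-type dose-response model is the usual functional form for extracting IC50 and the Hill coefficient from such data; the sketch below assumes the standard expression v/v0 = 1/(1 + ([I]/IC50)^n) rather than the exact formula used by the authors, and the data points are invented placeholders for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_response(conc, ic50, n):
    """Fractional residual enzyme activity at inhibitor concentration `conc` (same units as ic50)."""
    return 1.0 / (1.0 + (conc / ic50) ** n)

# Placeholder data: fraction of uninhibited AChE activity vs inhibitor concentration (nM)
conc     = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
activity = np.array([0.94, 0.88, 0.79, 0.58, 0.41, 0.26, 0.12, 0.06])

(ic50, n), _ = curve_fit(hill_response, conc, activity, p0=[5.0, 1.0])
print(f"IC50 = {ic50:.2f} nM, Hill coefficient = {n:.2f}")
```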

3.1. Chemistry
The synthetic path to compounds 8-10 is given in Scheme 1. The Niementowski reaction between 2-aminobenzoic (anthranilic) acid and cyclohexanone proceeded smoothly to give 3. The presence of the 9-chloro substituent on the molecule enabled further functionalization of the compound. The nucleophilic aromatic substitution of the 9-chloro substituent of 3, using a triple molar amount of the linker (1,8-diaminooctane), gave exclusively the monomeric product 4. Michael addition of 4 to the correspondingly substituted aroylacrylic acid amide gave the target compounds 8-10.

From the dose-response experiments we were able to determine the Hill coefficient and the existence of possible cooperative effects during the process of inhibitor binding.
The inhibition reaction was also monitored continuously during 15 min after the initiation of the reaction, in order to determine the time interval needed to achieve the equilibrium of the reversible inhibition reaction.

Scheme 1. Synthetic path to compounds 8 (R1=R2=H), 9 (R1=4-i-Pr, R2=H) and 10 (R1=Me, R2=H).

The synthesized compounds were structurally characterized by 1H and 13C NMR spectroscopy and by high resolution


mass spectrometry with electrospray ionization (ESI-HR-MS). The 1H NMR spectra of 8-10 are presented in Figures 1-3. The peaks in the spectrum of 8 were largely unresolved, since this spectrum was recorded at 200 MHz. However, in the spectra of compounds 9 and 10, the so-called ABX multiplet is visible, which represents a proof of the Michael addition of 4 to the aroylacrylic acid amide double bond. Those signals consist of three doublets of doublets (that appear like quartets) which originate from three chemically non-equivalent protons: the one that is connected to the stereocenter of the compounds, and the other two which are linked to the neighbouring carbon atom.

Figure 5. 13C NMR spectrum of 9.

Figure 1. 1H NMR spectrum of 8.

3.2. Biological studies
The inhibition potencies of compounds 8-10, expressed as IC50 values, are given in Table 1, along with the Hill coefficients. The dose-response curves are shown in Figures 7-9.
Table 1. The inhibition data for compounds 8-10.

Comp   R1          IC50 ± SEM (nM)   Hill coeff.
8      -H          7.76 ± 0.34       1.09 ± 0.08
9      -4-i-Pr     15.61 ± 1.12      1.05 ± 0.08
10     -3,4-diMe   5.24 ± 0.41       1.24 ± 0.16

Figure 2. 1H NMR spectrum of 9.

From an insight into Table 1 it can be seen that all three compounds are highly potent, low nanomolar inhibitors of AChE. The more potent inhibitors are the compounds which have no substituents on the aroyl-phenyl ring (8) or have small substituents, such as methyl groups, on the aroyl-phenyl ring (10). Compound 9, with the more voluminous isopropyl group, has a slightly higher IC50 value. It has been proven in a number of studies that dual AChE inhibitors bind in such a way that the tacrine substructure is oriented toward the bottom of the active site gorge, interacting with the amino acid residues that belong to the AS (Trp 86 and Tyr 337), while the other (usually aromatic and polycyclic) fragment of the molecule is oriented toward the entrance of the active site gorge and interacts with the amino acid residues that belong to the PAS (namely Trp 286, among others). The finding that compound 9 exerts a slightly lower inhibition potency compared to compounds 8 and 10 was unexpected, since it is well known that the PAS area of the enzyme is wide and can accommodate a wide variety of highly voluminous molecular fragments. However, it is possible that the aroylacrylic acid amide fragment of 9, bearing a 4-i-Pr substituent, experiences some steric hindrance upon binding to the PAS and is therefore unable to fully reach the AS of AChE and establish strong contacts with the amino acid residues at this site of the enzyme, which results in a slightly lower potency.

Figure 3. 1H NMR spectrum of 10.

The 13C NMR spectra of 8-10 are presented in Figures 4-6.

Figure 4. 13C NMR spectrum of 8.


The inspection of the Hill coefficients in Table 1 showed that all three compounds have values of this coefficient around one, which indicates almost no cooperativity in the ligand binding to the four subunits of the electric eel AChE tetramer.
We also determined the inhibition type for the three compounds by following the enzyme reaction at several different substrate concentrations, for two or three fixed inhibitor concentrations. The inhibition constants and types are presented in Table 2, and the corresponding double-reciprocal Lineweaver-Burk plots are given in Figures 10-12.

Figure 7. The dose response curve for the AChE


inhibition by compound 8.

From the inspection of the Lineweaver-Burk plots it can be seen that compound 8 exerts competitive inhibition, which is indicated by the intersection of the lines on the y-axis, meaning that the compound binds solely to the free enzyme. On the contrary, the other two compounds, 9 and 10, are noncompetitive inhibitors and bind both to the free enzyme and to the enzyme-substrate complex, which is evident from the intersection of the lines in the upper left quadrant. The change of the inhibition mode from competitive to noncompetitive upon introduction of the 4-i-Pr (as in 9) or 3,4-diMe (as in 10) substituents is unusual, and those effects should be inspected further in more detail.
Table 2. The inhibition constants for the binding to the free enzyme (Ki1) and to the enzyme-substrate complex (Ki2) for 8, 9 and 10, and the corresponding inhibition types.

Figure 8. The dose response curve for the AChE


inhibition by compound 9.

Comp   Ki1 (nM)   Ki2 (nM)   Inh. type
8      2.10       /          competitive
9      7.45       9.26       noncompetitive
10     4.46       9.18       noncompetitive

Figure 9. The dose response curve for the AChE inhibition by compound 10.

Subsequently, we tested the velocity of equilibrium establishment for the reversible inhibition by following the residual enzyme activity over a longer period of time (15 min) after the initiation of the inhibition reaction. The equilibrium is reached almost instantly, and the residual enzyme activity was constant during the monitoring of the reaction (results not shown). Therefore, we can say that the three synthesized novel compounds (8, 9 and 10) represent true, fast reversible inhibitors of AChE.
Figure 10. Lineweaver-Burk plot of AChE (0.03 U) in the absence and in the presence of different concentrations of 8: pink dots - no inhibitor, red dots - 3.47 nM, green dots - 8.67 nM and blue dots - 13.88 nM.


Figure 11. Lineweaver-Burk plot of AChE (0.03 U) in the absence and in the presence of different concentrations of 9: pink dots - no inhibitor, red dots - 6.3 nM and green dots - 12.6 nM.

Figure 13. Compound 8 docked at the AChE active site.

4. CONCLUSIONS

Figure 12. Lineweaver-Burk plot of AChE (0.03 U) in the absence and in the presence of different concentrations of 10: pink dots - no inhibitor, red dots - 3.97 nM and green dots - 7.94 nM.

In summary, based on the previously published 3D-QSAR model for the set of 110 dual-binding AChE inhibitors, we designed in silico several heterodimeric inhibitors of AChE based on the tacrine and aroylacrylic acid amide scaffolds, mutually linked by eight methylene units. We synthesized the three proposed compounds and tested their inhibition potency toward AChE. According to the IC50 values, compound 10, with 3,4-diMe groups on the aroyl-phenyl ring, seemed to be the most potent inhibitor; however, further estimation of the inhibition constants and types proved that the derivative 8, unsubstituted at the aroyl-phenyl ring, was the most potent compound, with a Ki1 value of 2.10 nM. We have also shown that compound 8 was a competitive inhibitor, while the two other compounds acted as noncompetitive inhibitors of AChE. The possible interactions between compound 8 and the amino acid residues in the AChE active site gorge were estimated by docking calculations.

3.3. Docking studies

References

The docking pose of compound 8 is shown in Figure 13. The tacrine fragment of the molecule is accommodated at the bottom of the active site gorge and forms stacking interactions with Trp 86. The aroylacrylic acid amide fragment is oriented toward the entrance of the gorge. The aroyl carbonyl group of the molecule forms close contacts with Trp 286, a residue belonging to the PAS of the enzyme. Three putative hydrogen bonds were found: one between the hydrogen of the linker NH group (situated at the bottom of the gorge) and the oxygen of the Tyr 341 OH group, the second between the nitrogen of the linker NH amino group (situated at the bottom of the gorge) and the Tyr 124 OH group, and the third between the linker NH amino group (situated at the entrance of the gorge) and the backbone carbonyl oxygen of Tyr 341. The overall mode of binding of 8 is similar to that found for other heterodimeric tacrine AChE inhibitors.

[1] Petroianu, G.A., Arafat, K., Schmitt, A., Hassan, M.Y.: J. Appl. Tox. 25 (2005) 60-67.
[2] Meshulam, Y., Cohen, G., Chapman, S., Alhalai, D., Levy, A.: J. Appl. Tox. 21 (2001) S75-78.
[3] Gibney, G., Camp, S., Dionne, M., MacPhee-Quigley, K., Taylor, P.: Proc. Natl. Acad. Sci. U.S.A. 87 (1990) 7564-7550.
[4] Ekholm, M., Konschin, H.: J. Mol. Struct., Theochem, 467 (1990) 161-172.
[5] Ordentlich, A., Barak, D., Kronman, C., Ariel, N., Segall, Y., Velan, B., Shafferman, A.: J. Biol. Chem. 273 (1998) 19509-19517.
[6] Kreienkamp, H.J., Weise, C., Raba, R., Aaviksaar, A., Hucho, F.: Proc. Natl. Acad. Sci. U.S.A. 88 (1991) 6117-6121.
[7] Weise, C., Kreienkamp, H.J., Raba, R., Pedak, A., Aaviksaar, A., Hucho, F.: EMBO J. 9 (1990) 3885-3888.
[8] Sussman, J.L., Harel, M., Frolow, F., Oefner, C., Goldman, A., Toker, L., Silman, I.: Science 253 (1991) 872-879.
[9] Ordentlich, A., Barak, D., Kronman, C., Flashner, U., Leitner, M., Segall, Y., Ariel, N., Cohen, S., Velan, B., Shafferman, A.: J. Biol. Chem. 268 (1993) 17083-17095.
[10] Vellom, D.C., Radić, Z., Ying, L., Pickering, N.A., Camp, S., Taylor, P.: Biochemistry 32 (1993) 12-17.
[11] Taylor, P., Lappi, S.: Biochemistry 14 (1975) 1989-1997.
[12] Barak, D., Kronman, C., Ordentlich, A., Naomi, A., Bromberg, A., Marcus, D., Lazar, A., Velan, B., Shafferman, A.: J. Biol. Chem. 269 (1994) 6296-6305.
[13] Bourne, Y., Taylor, P., Radić, Z., Marchot, P.: EMBO J. 22 (2003) 1-12.
[14] Vitorović-Todorović, M., Cvijetić, I., Juranić, I., Drakulić, B.: J. Mol. Graph. Mod. 38 (2012) 194-210.
[15] Vitorović-Todorović, M., Juranić, I., Mandić, Lj., Drakulić, B.: Bioorg. Med. Chem. 18 (2010) 1181-1193.
[16] Vitorović-Todorović, M., Koukoulitsa, C., Juranić, I., Mandić, Lj., Drakulić, B.: Eur. J. Med. Chem. 81 (2014) 158-175.
[17] Guex, N., Peitsch, M.C.: Electrophoresis 1997, 18, 2714; http://www.expasy.org/spdv
[18] Sanner, M.F.: J. Mol. Graph. Modell. 1999, 17, 57; AUTODOCK Tools 1.5.4.
[19] Boström, J.: J. Comput. Aided Mol. Des. 2002, 15, 1137.
[20] Morris, G.M. et al.: J. Comput. Chem. 1998, 19, 1639.


DEPENDENCE OF CBRN INSULATING MATERIALS PROTECTION TIME UPON BUTYL-RUBBER AND FLAME RETARDANT CONTENT
VUKICA GRKOVIĆ
Trayal Korporacija, Kruševac, fzstrayal@gmail.com
VLADIMIR PETROVIĆ
Trayal Korporacija, Kruševac, fzstrayal@gmail.com
ŽELJKO SENIĆ
Military Technical Institute, Belgrade, zsenic1@gmail.com
MAJA VITOROVIĆ-TODOROVIĆ
Military Technical Institute, Belgrade, mvitod@chem.bg.ac.rs
Abstract: Insulating-type materials are the most common materials incorporated in protective CBRN suits, especially when one needs protection from high concentrations of chemical warfare agents or when decontamination is performed; for such tasks insulating-type CBRN suits are the best option. CBRN insulating materials are usually made of polyamide or polyester fabric coated with a butyl-rubber layer of precise thickness, or with some other kind of material such as polyurethane. The protection time, usually determined by the sulfur mustard drop challenge, is the main property which defines the protective power of such materials. However, insulating CBRN materials often have to meet other demands, such as flammability resistance. The addition of flame retardants to butyl-rubber materials usually leads to a decrease in their protective properties. Therefore, during the product development process, the butyl-rubber and flame retardant content need to be finely tuned to achieve good protective properties combined with flame resistance. In the current work, we investigated the dependence of the protection time (determined by the sulfur mustard drop challenge, vapor detection, according to SORS 6701/04) upon the butyl-rubber and flame retardant content, along with the oxygen consumption index, for several insulating materials made of polyamide fabric coated with butyl-rubber layers of different thickness.
Keywords: CBRN-suits, sulfur mustard, protection time, butyl-rubber, flammability resistance.

1. INTRODUCTION
Chemical contamination can be manifested through the
use of chemical warfare agents (CWAs) in military
actions, in case of accidents and also in terrorist attacks.
There are also increased risks of chemical contamination
by pesticides, widely used toxic chemicals. In the case of
CWA contamination adequate measures of respiratory
and percutaneous protection should be applied which
usually involves employment of face-masks, respirators,
protective suits and overgarments. Adequate protective
material represents compromise between acceptable
protective properties (impermeability for toxic substances
in vapour, aerosol and liquid state) and physiological
suitability [1-4]. The protective materials can be sorted into two main groups: a filtration type made of thin-layer activated carbon liners (such as Saratoga) and those made of insulating rubber materials. Although protective gear made of insulating materials affords excellent protection against different kinds of toxic chemicals (especially CWAs), it has some major drawbacks, such as bulkiness, increased weight and, consequently, a lack of breathability, which induces heat stress in soldiers and other users. Moreover, this kind of material represents only a physical barrier against chemical agents: the toxic
chemicals are retained within the material, so further steps

594

DEPENDANCEOFCBRNINSULATINGMATERIALSPROTECTIONTIMEUPONBUTYLRUBBERANDFLAMERETARDANTCONTENT

OTEH2016

2. MATERIALS AND METHODS

2.1. Protective material samples

All the tested protective materials were produced by the Trayal corporation, Kruševac, Serbia. The protective materials were obtained by lining the polyamide fabric with three types of polymer mixtures: BS butyl rubber with no addition of flame retardants; the B-25 mixture, consisting of 100 g of butyl rubber-268, 60 g of zinc borate, 30 g of antimony(III) oxide and 30 g of polyvinylchloride; and the H-75 mixture, consisting of 100 g of Hypalon (a synthetic rubber, chlorosulfonated polyethylene, CSPE), 120 g of Martinal A (aluminium(III) oxide trihydrate), 20 g of antimony(III) oxide and 40 g of zinc borate. By combining these three types of polymers, eight types of samples were produced. Samples 1 and 2 contained only the BS mixture (of different thickness) without any flame retardant. Sample 3 contained three layers of B-25 and one layer of H-75. Sample 4 contained four layers of B-25 (2 layers on each side of the polyamide fabric) and two layers of H-75 (one layer per side of the polyamide fabric). Samples 5, 6 and 7 contained the BS mixture on both sides of the polyamide fabric and 2, 1 and 4 layers of the H-75 mixture, respectively. Sample 8 contained the B-25 mixture on both sides and three layers of the H-75 mixture. The description of the samples is given in Table 1.

2.2. Determination of protection time

The protection time of the insulating materials containing different amounts of butyl rubber and Hypalon as a flame retardant was determined according to the SORS 6701/04 standard. The method consists of laying drops of sulphur mustard on the tested material; their penetration through the material is detected by the reaction of a suitably impregnated indicator gauze, placed beneath the tested material, which changes colour from red to blue due to the hydrochloric acid released in the reaction. The material samples and the indicator gauze are placed in the sealed glass apparatus shown in Figure 1.

Figure 1. Glass apparatus for determination of protection time.

Impregnation of indicator gauze: The indicator gauze is impregnated by immersion into a 0.5 % solution of Congo red in an ethanol-water (50 %) mixture at 50 °C; after drying at room temperature, it is activated by the addition of a 3 % solution of N-chlorobenzoyl-o-toluidide in chloroform. The N-chlorobenzoyl-o-toluidide is synthesized by the following two-step process:

Synthesis of benzoyl-o-toluidide: To a mixture of o-toluidine (107 g) and triethylamine (137 ml) dissolved in benzene (1 l), benzoyl chloride (140 g) is added dropwise while stirring at room temperature. After the completion of the addition, the mixture is refluxed at 70 °C for 2 h. The mixture is then cooled to room temperature, and the obtained white precipitate is separated by filtration. The crude product is recrystallized from water.

Synthesis of N-chlorobenzoyl-o-toluidide: To a mixture of NaOH (64 g) and distilled water (600 ml), cooled to 5-10 °C, Cl2 gas is introduced until pH 8.5 is reached. After that, NaHCO3 (40 g) is added and stirred until it is completely dissolved. A suspension of benzoyl-o-toluidide (17 g) in EtOH (300 ml) is then added and the resulting mixture is stirred for 2 h at 20 °C. The obtained product is separated by filtration and recrystallized from water.

The quality and sensitivity of the indicator gauze is tested above a specifically designed glass vessel filled with an appropriate amount of sulphur mustard. The indicator gauze is placed on top of the open vessel; if the prepared gauze has the necessary sensitivity, the colour change from red to blue takes place within a time interval of 45 s.

The circular material samples (d = 40 mm) and indicator gauze of the same dimensions and shape are placed in the apparatus shown in Figure 1. Five drops (with a volume of 1.5 µL) of sulphur mustard were placed on each sample. Immediately afterwards, the samples were sealed using glass lids and paraffin. The time until the first colour change from red to blue was observed was measured.

2.3. Determination of oxygen index

The oxygen index was measured according to the ASTM D2863-70 standard. This method determines the relative flammability of polymer materials by measuring the minimal O2 concentration in a mixture with N2 (with a gradual increase in O2 concentration) needed to support the burning of the material. The oxygen index therefore represents the minimal oxygen concentration, expressed as a volume percentage of O2 in the mixture with N2, which supports the burning of the examined material.

The experiment is performed in a glass column with an internal diameter of 75 mm and a height of 450 mm. The base of the column, in which O2 and N2 are mixed and evenly distributed before they enter the column, is made of non-flammable material and is filled with glass beads (d = 3-5 mm) to a height of 80-100 mm. Samples are placed in a sample holder at the centre of the column, in a vertical position; the lower end of the sample is immersed into the glass beads and fixed in this way. Commercial O2 and N2 are used. If air is used, it should previously be purified and dried. The gas flow in every gas line is controlled using rotameters or needle valves. The samples of the examined material must have the following dimensions: length of 70-150 mm, width of 6.5 ± 0.5 mm and thickness of 3.0 ± 0.5 mm. The examined samples must have flat sides and edges; roughness of the material is not allowed.
The starting O2 concentration is chosen based on experience with previously examined, relatively similar materials. If no experience or presumption is available, the sample is tested in air and the speed of burning is monitored. If the burning in air is slow, the first oxygen concentration tested should be set at around 25 %.

For the determination of the oxygen index, the sample of the material is placed in the sample holder and the tip of the sample is ignited using a Bunsen burner; when the sample starts to burn, the burner is removed and the timer is switched on. The concentration of O2 is too high and should be decreased in the following cases: (a) the sample burns longer than 3 min, or (b) the burned length of the sample is longer than 50 mm. The oxygen concentration should be increased if the sample stops burning before 3 min or if the burned length of the sample is shorter than 50 mm. The operations described above are repeated until the minimal O2 concentration needed for stable burning of the material is determined.

The oxygen index is calculated from the formula:

n (%) = 100 x O2 / (O2 + N2)

where O2 and N2 represent the volume flows of oxygen and nitrogen, respectively, in cm3/s.
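For illustration only, the up/down adjustment of the oxygen concentration and the final index calculation described above can be written out as a short script. This is a minimal sketch, not part of the SORS or ASTM procedures; the function names, the step size and the trial values are hypothetical.

# Minimal sketch of the oxygen index evaluation described above
# (assumed values, not measured data). Flows are volume flows in cm3/s.

def oxygen_index(o2_flow, n2_flow):
    """Oxygen index n(%) = 100 * O2 / (O2 + N2) from volume flows in cm3/s."""
    return 100.0 * o2_flow / (o2_flow + n2_flow)

def next_o2_concentration(current, burn_time_s, burned_length_mm, step=1.0):
    """One step of the up/down adjustment: decrease O2 if the sample burns
    longer than 3 min or more than 50 mm of it burns, otherwise increase."""
    if burn_time_s > 180 or burned_length_mm > 50:
        return current - step   # too much oxygen, lower the concentration
    return current + step       # burning stopped too early, raise it

# Example: a hypothetical trial at 25 % O2 that burned for 200 s over 60 mm
print(oxygen_index(2.5, 7.5))                 # -> 25.0 %
print(next_o2_concentration(25.0, 200, 60))   # -> 24.0 % for the next trial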

3. RESULTS AND DISCUSSION

The description of the rubber material samples, the specific mass expressed as mass per unit surface of the sample, the protection time and the oxygen consumption index as a measure of flammability resistance are given in Table 1.

Table 1. The design, specific mass, protection time and oxygen consumption index of the eight insulating rubber materials.

No   BS [layer]   B-25 [layer]   H-75 [layer]   mass [g/m2]   p. t. [min]   O2 [%]
1    1            /              /              291.5         >210          20.2
2    1            /              /              263.5         >210          20.3
3    /            3              2              421.0         130           33.3
4    /            4              2              506.0         186           37.1
5    2            /              2              340.0         198           25.4
6    2            /              1              315.0         198           23.8
7    2            /              4              440.0         250           26.8
8    /            2              3              460.0         206           31.9

Samples 1 and 2, which contained only the BS layer, without any flame retardants, had fairly good protection time (greater than 210 min) in relation to their specific mass. Since they contain a BS layer of relatively small thickness in comparison with the other examined samples, it should be expected that they would cause the least heat stress in users of all the tested samples. On the other hand, since they contain no flame retardants, their oxygen consumption index was relatively low and close to the concentration of oxygen in air, so they exhibit a fairly high level of flammability.

The most striking alteration in protection time was observed for samples 3 and 4. In this case, the BS layer was omitted and the B-25 rubber mixture was applied instead, in 3 and 4 layers for samples 3 and 4, respectively. This mixture contains the inorganic flame retardants zinc borate and antimony(III) oxide. The presence of these compounds in the rubber mixture caused a sharp decrease in protection time, which is especially evident in sample 3, which had a highly unsatisfactory protection time of only 130 min. The presence of one additional layer of B-25, as in sample 4, resulted in an increased protection time of 186 min. The specific mass of samples 3 and 4 was relatively high compared with samples 1 and 2, so it is expected that protective gear made of such materials would induce higher levels of heat stress in soldiers and other users. Samples 3 and 4 showed the highest levels of flammability resistance of all the tested samples, with oxygen consumption indices of 33.3 and 37.1, respectively. This result was expected, since they contain the largest amounts of flame retardants of all the tested samples.

Samples 5, 6 and 7 contained two layers of the BS mixture on both sides of the polyamide fabric and 2, 1 and 4 layers of the Hypalon H-75 mixture, respectively. The correlation between specific mass, protection time, oxygen consumption index and the number of H-75 layers was evident: the higher the number of layers, the higher the protection time and the oxygen consumption index. Sample 7 had a fairly good protection time and flammability resistance (its oxygen consumption index was 26.8), but its specific mass was too high for this material to be considered for incorporation in protective suits, since it would cause major heat stress in users.

Sample 8, which contained two layers of the B-25 mixture and three layers of the H-75 mixture, had a moderate protection time of 206 min and fairly good flammability resistance, with an oxygen consumption index of 31.9.

4. CONCLUSIONS

In the present communication, we designed and produced 8 insulating rubber material samples obtained by deposition of three types of rubber polymer mixture, with and without flame retardants, on polyamide fabric. The three types of mixture were: BS butyl rubber with no addition of flame retardants; the B-25 mixture, consisting of 100 g of butyl rubber-268, 60 g of zinc borate, 30 g of antimony(III) oxide and 30 g of polyvinylchloride; and the H-75 mixture, consisting of 100 g of Hypalon (a synthetic rubber, chlorosulfonated polyethylene, CSPE), 120 g of Martinal A (aluminium(III) oxide trihydrate), 20 g of antimony(III) oxide and 40 g of zinc borate. By combining these three types of polymers, eight types of samples were produced. We inspected their specific mass expressed as mass per unit surface area (in order to estimate their physiological suitability), their protection time and their oxygen consumption index as a measure of flammability resistance. The best protection time with the lowest specific mass was shown by the samples without any flame retardants, obtained using the BS mixture only. The presence of inorganic flame retardants, as in mixtures B-25 and H-75, reduced the protection time of the samples considerably, and a higher number of layers had to be applied in order to retain fairly good protective properties of the materials. Consequently, those samples had good flammability resistance, with the oxygen consumption index in the range of 31-37 %, but considerably higher specific mass. In conclusion, the composition of the rubber mixture and the content of the inorganic flame retardants need to be finely tuned in the design process of insulating rubber materials for CBRN protection purposes in order to meet all the necessary quality demands, such as good protection power, physiological suitability and flammability resistance.

ACKNOWLEDGEMENTS

This work was supported by the Serbian Ministry of Education, Science and Technological Development, grant TR34034.

References
[1] Senić, Ž., Jevremović, M., Karkalić, R.: Proceedings, 2nd Scientific expert conference on defensive technologies, 1(2), IX-25 (2007).
[2] Karkalić, R., Senić, Ž.: Proceedings, 2nd Scientific expert conference on defensive technologies, 1(2), IX-28 (2007).
[3] Karkalić, R., Radaković, S., Senić, Ž., Lazarević, N., Jovanović, D.: Proceedings, 3rd Scientific expert conference on defensive technologies, Belgrade, 2009.
[4] Vitorović-Todorović, M.: Novi glasnik, No 2, 2007.
[5] SORS 6701/04 standard: Vreme zaštite tankoslojnih filtrosorpcionih i izolirajućih materijala na dejstvo para i kapi S-iperita.
[6] JUS Z.C8.023 standard: Ispitivanje zapaljivosti plastičnih masa i gume - Ispitivanje zapaljivosti određivanjem indeksa kiseonika.

FILTERING HALF MASKS USAGE FOR PROTECTION AGAINST AEROSOL CONTAMINATION OF BIOLOGICAL AGENTS
NEGOVAN IVANKOVIĆ
Military Academy, University of Defense, Belgrade, negovan.ivankovic@gmail.com
DUŠAN RAJIĆ
Innovation Centre, Faculty of Technology, University of Belgrade, rajic.dusan1@gmail.com
RADOVAN KARKALIĆ
Military Academy, University of Defense, Belgrade, rkarkalic@yahoo.com
DEJAN INDJIĆ
Military Academy, University of Defense, Belgrade, rkarkalic@yahoo.com
DUŠAN JANKOVIĆ
Military Academy, University of Defense, Belgrade, rkarkalic@yahoo.com
ŽELJKO SENIĆ
Military Technical Institute, Belgrade, zsenic1@gmail.com
MARINA ILIĆ
Military Technical Institute, Belgrade, lemezalen@yahoo.com

Abstract: The possibility of aerosol contamination of people and the environment with biological agents, as a result of terrorist attacks, natural epidemics or war activities, represents a challenge to all subjects involved in emergency response, to healthcare workers and to the civilian population. Dealing with it primarily requires the use of respiratory protection devices (RPD). These devices must be made of materials that fulfil two basic criteria for the above-mentioned purpose: high filtration efficiency against bioaerosols and appropriate breathing resistance. Consequently, it is necessary to review the interdependency of the factors bioaerosol contamination - RPD - protective efficacy and physiological suitability. The paper presents the possibilities of using filtering half masks according to the filtration performance of their embedded filtering materials, with special emphasis on the application of nanotechnological achievements for improving their filtration efficiency. A review is given of the methods of their testing and evaluation for protection against bioaerosols, in the world and in our country.
Keywords: respiratory protection, filtering half masks, filtering efficiency, biological aerosols.
1. INTRODUCTION

The possibility of aerosol contamination of people and the environment with biological agents, as a result of natural epidemics or combat activities (by terrorists or in war) [1-4], represents a challenge to all subjects involved in emergency response, to healthcare workers and to the civilian population [5, 6]. The complexity of the situation, in terms of an adequate response and the prevention of the spread of infection, is indicated by the strong connection between ventilation systems and the transmission of infection in hospitals, offices, airplanes and ships [7, 8]. The real threat is reflected in the fact that, in intensity and efficiency, the effects of aerosol contamination of people are comparable to the effects of intravenous application of biological agents [9, 10].

According to the recommendations of the World Health Organization (WHO), among the series of prescribed preventive measures for preventing the occurrence of contamination and aerosol dissemination of biological agents in the environment, one of the main ones is the implementation of respiratory protection measures [11]. Although most efficient, protection of the human organism with means of the fully insulating type is not suitable for prolonged use (physiological problems of the organism due to difficult working conditions). A possible solution to this complex problem lies in the use of respiratory protection devices (RPDs) of the filtering type [12], because the use of modern materials increases their filtration efficiency to the maximum, while ensuring appropriate physiological suitability.

2. THE USAGE OF FILTERING HALF MASKS

During the mid-20th century, research on masks was mostly focused on the constructional properties of filtering half masks. During these investigations, a reduction of 50 % in the prevalence of tuberculosis was found among nurses who used a mask of 6 fabric layers. After that, several models of nonwoven disposable masks were introduced [13, 14].

(a longer ''filter life''), high charging capacity and easy maintenance. There are also some practical limitations, such as the fold height of the filtering materials, the achievement of an extremely low weight, and the depth of the membrane.

Along with their use in medicine, half masks for particle filtering were developed because workers employed in industry needed respirators that could be worn all day. These devices had a simple design, consisting of nonwoven fibers formed into a cup, with an adjustable metal strip over the nose and a tape that ensures that the half mask fits the user's face.

The challenge is to find optimal solutions. Possible directions are:

In the 1970s, a model of filtering half mask with a valve that allowed the user to easily exhale humid air was introduced. It improved user comfort in conditions of high heat and humidity, which are critical factors when wearing a respirator at work.

- development of a layered composite structure by adding nanofibers onto a substrate filtration material (which has appropriate mechanical properties and is complementary to the nanofiber webs),

- impregnation of the substrate filtration material with nanoparticles.

The diameter of electrospun nanofibers is three times smaller than that of their meltblown counterparts, which gives them a major role in increasing the active surface area and significantly decreasing the weight. Nanofibers made of nonwoven polymeric material, owing to their increased active surface and low weight, are the main basis for efficient filtering materials of filtering half masks [27-30]. The ability to filter micron and submicron particles is provided by the pores between the nanofibers through which the air passes, which are usually about 3 nm in size.

The development of scientific knowledge gave the basis for progress in the development of respiratory protection devices. Filtering half masks such as modern respirators and surgical masks (Figure 1) are single-use devices and must be discarded after each use [15]. They are intended for all circumstances in which conditions for possible air contamination exist [16] - in industrial and pharmaceutical plants, in medical facilities, while the user stays among sick or even apparently healthy people (''healthy carriers''), and in public places (airports, shopping centres, metro stations, theatres, sports halls).

Impregnation of the basic filtering materials with nanoparticles is carried out using ultrafine nanoparticles of titanium oxide, aluminium oxide monohydrate, silver oxide, gold, copper, etc. [31-34]; in this way the active surface of the filtering material is modified, and an increase of its filtration efficiency is thereby achieved.
It is important to note that the properties of these composites should include permeability specifications, a certain resistance to penetration by liquids, flammability, mechanical strength and wearer comfort (through a suitable breathing pressure drop).

4. TESTING METHODOLOGIES AND EVALUATIONS OF FILTERING HALF MASKS FOR PROTECTION AGAINST BIOLOGICAL AEROSOLS

Figure 1: Filtering half masks: surgical mask type (a) and respirator type (b)

3. THE APPLICATION OF
NANOTECHNOLOGY ACHIEVEMENTS
FOR IMPROVEMENT OF FILTERING HALF
MASKS CHARACTERISTICS

Methodologies for testing and evaluating filtering half masks are based on monitoring the parameters that stand out in the mechanism of aerosol filtration by filter media. When breathing in, contaminated air passes through the filtering material and enters the respiratory system. During exhalation, the exhaled air again passes through the filtering material out into the atmosphere. In this way, air is filtered in both directions (Figure 2). Thus, the protective efficacy of these devices is almost entirely determined by the number of layers of filtering material and their characteristics, on condition that appropriate physiological suitability of the device for use is provided.

Filtering half masks are made of various types of non-woven materials, which may be based on natural or man-made fibers. Development goals for improving filtering media of this kind are long-term use of the filtering material

According to the aforementioned, an evaluation of RPD efficiency is given on the basis of testing the inward leakage of the aerosol contaminant towards the interior of the protective means, as well as the efficiency and dynamics of filtration of aerosol contaminants through the filtration medium.
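As a simple numerical illustration of how such measurements are usually reduced to figures of merit, filtration efficiency and inward leakage can be expressed as ratios of downstream to upstream aerosol concentrations. The sketch below is generic and not the procedure of any of the cited standards; the variable names and the particle counts are hypothetical.

# Generic sketch: filtration efficiency and total inward leakage expressed as
# ratios of aerosol concentrations (hypothetical values, not measured data).

def filtration_efficiency(c_upstream, c_downstream):
    """Fraction of the challenge aerosol retained by the filtering material."""
    return 1.0 - c_downstream / c_upstream

def total_inward_leakage(c_inside, c_ambient):
    """Fraction of the ambient aerosol found inside the facepiece."""
    return c_inside / c_ambient

# Example with assumed particle counts per litre of sampled air
print(f"efficiency: {filtration_efficiency(12000, 180):.3%}")   # ~98.5 %
print(f"leakage:    {total_inward_leakage(95, 12000):.3%}")     # ~0.8 %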

With the development of RPDs, understanding of the positive effects of their use in preventing the spread of infections has increased in many countries of the world, as have the international and national legislation which regulate it through national and local protection measures [17-26].

For these tests, methods are used that are based on rinsing out the microorganisms deposited in the filtering material, incubating them under defined conditions and then counting them using microscopic methods.

On the other hand, in less well equipped laboratories, solid aerosols of monodisperse urea or sodium chloride are commonly used as adequate test substances for aerosolized biological agents and toxins (both are suitable, since they have a granulometric particle size distribution in the range of bioaerosols, and they are harmless and inexpensive). For these tests, the concentration of the aerosols is determined by the flame photometry method, and the granulometric distribution by an electrical particle analyzer [39, 40].

Figure 2: The principle of filtering half mask functioning
Despite numerous studies related to the filtration of biological aerosols by filtering materials, there is no unified standard methodology that could be applied for RPD testing and evaluation in Europe and worldwide. This is because:
5. CONCLUSION

- there is a huge range of biological particles that may negatively affect humans,

It can be concluded that awareness of the prevention of aerosol contamination and of the spread of biological agents by means of personal respiratory protection devices is at a high level in many countries of the world. During the last two decades, an intensified use of nanotechnology achievements can be observed in:

- for many of them, scientists are not using the same adequate test substances, so the methodologies differ accordingly.
Nevertheless, there is a similar practice applied by numerous scientists: testing the dynamic flow of an adequate test substance (bioaerosol surrogates and/or indicator organisms) through filtering materials in a closed measuring system under laboratory conditions [35-41]. As a result, deposition of particles on the nonwovens takes place. Regarding the filtration phenomena, of basic importance for particle deposition are the flow speed, the particle size and shape, and the parameters of the filtering material (such as fibre thickness, porosity and the possibility of attraction by electrostatic forces). For the measuring system, the particle detection solution is of key importance.

improving the filtering characteristic of the filtering


materials, and
improving existing and developing new methods for
testing and evaluating the protection effectiveness of
filtering half masks by all testing elements (surrogates,
test substance, test procedures, instrumentation).
According to all of the above, filtering half masks can serve as an effective and convenient means of respiratory protection for all subjects involved in emergency response (military, police, firefighters), for industrial and pharmaceutical facilities, for healthcare workers and for the civilian population.

Many works concerned research aimed at confirming the efficiency of filtering half masks in protecting an RPD user against large bioaerosol droplets generated by speaking, coughing or sneezing.

ACKNOWLEDGEMENT
The Ministry of Education, Science and Technological
Development of the Republic of Serbia supported this
work, Grant No TR34034 (2011-2016).

As surrogates and indicator organisms for aerosol tests in better equipped laboratories, bacteriophages are used, because they are perceived to be good models for the study of airborne viruses (safe to use under controlled conditions, displaying structural features similar to those of human and animal viruses, and relatively easy to produce in large quantities). Their representatives are:

References
[1] Aitken,C., Jeffries,D.J.: Nosocomial spread of viral
disease, Clinical Microbiology Reviews, 14 (2001)
528-546.
[2] Li,Y., Huang,X., Yu,I.T., Wong,T.W, Qian,H.: Role
of air distribution in SARS transmission during the
largest nosocomial outbreak in Hong Kong, Indoor
Air, 15 (2005) 83-95.
[3] Tucker,J.B.: The current bioweapons threat. In
Hunger,I., Radosavljevic,V., Belojevic,G. &
Rotz,L.D. (Eds.), Biopreparedness and Public
Health. Springer, Amsterdam, 2013.
[4] Fennelly,K.P.,
Davidow,A.L.,
Miller,S.L.,
Connell,N., Ellner,J.J.: 'Airborne infection with
Bacillus anthracis - from mills to mail, Emerging
Infectious Diseases,10 (2004) 996-1001.
[5] Verreault,D., Moineau,S., Duchaine,C.: Methods for

- PR772 (Tectiviridae family) and φ6 (Cystoviridae family) - excellent surrogates for human influenza A virus H1N1 (Orthomyxoviridae family) [37],
- MS2 (Leviviridae family) and φX174 (Microviridae family) - excellent surrogates for Newcastle disease virus (Paramyxoviridae family) [37],
- Mycobacterium abscessus (Mycobacteriaceae family), Staphylococcus epidermidis (Staphylococcaceae family), Bacillus subtilis (Bacillaceae family), Micrococcus luteus (Micrococcaceae family) and Pseudomonas alcaligenes (Pseudomonadaceae family) [38, 39] - surrogates for a wide range of biological aerosols.

Services, Public Health Service, Centers for Disease


Control, National Institute for Occupational Safety
and Health, DHHS (NIOSH), (2005).
[20] Occupational Safety and Health Administration.
OSHA Technical Manual (OTM), Section VIII:
Chapter 2 - Respiratory Protection. United States
Department of Labor Occupational Safety and
Health Administration, (1999).
[21] Center for Disease Control and Prevention.
Guidelines for Preventing the Transmission of
Mycobacterium tuberculosis in Health-Care Settings.
Morbidity and mortality weekly report, (2005).
[22] National Health and Medical Research Council.
Australian Government. Australian guideline for the
Prevention and Control of Infection in Healthcare,
(2010).
[23] Health Protection Agency UK. SARS - hospital
infection control guidance, (2005).
[24] Public Health Agency of Canada. Disease Prevention
and Control Guidelines - Routine Practices and
Additional Precautions for Preventing the
Transmission of Infection in Healthcare Settings,
(2013).
[25] Breiman,R.F., Evans,M.R., Preiser,W., Maguire,J.,
Schnur,A., Li,A., Bekedam,H., MacKenzie,J.S.: Role
of China in the quest to define and control Severe
Acute Respiratory Syndrome, Emerging Infectious
Diseases, 9 (2003) 1037-1041.
[26] Seale,H., MacIntyre,R., McLaws,ML., Dwyer,D.,
Health care worker practices around face mask use
in hospitals in Hanoi, Vietnam, International Journal
of Infectious Diseases, 16 (2012) 317473.
[27] Thandavamoorthy,S., Gopinath,N., Ramkumar,S.S.: Self-assembled honeycomb polyurethane nanofibers, Journal of Applied Polymer Science, 101 (2006) 3121-3124.
[28] Thandavamoorthy,S., Bhat,G.S., Tock,R.W., Parameswaran,S., Ramkumar,S.S.: Electrospinning of nanofibers, Journal of Applied Polymer Science, 96 (2005) 557-569.
[29] Kosmider,K., Scott,J.: Polymeric nanofibers exhibit an enhanced air filtration performance, Filtration & Separation, 39 (2002) 20-22.
[30] Schreuder-Gibson,H., Gipson,P., Wadsworth,L., Hemphill,S., Vontorcik,J.: Effect of filter deformation on the filtration and air flow for elastomeric nonwoven media, Advanced Filtration & Separation Technologies, 15 (2002) 525-537.
[31] Qi,K.H., Wang,X.W., Xin,J.H.: Photocatalytic self-cleaning textiles based on nanocrystalline titanium dioxide, Text Res J, 81 (2011) 101-110.
[32] Kaledin,L.A., Tepper,F., Kaledin,T.G.: Long-range attractive forces extending from alumina nanofiber surface, International Journal of Smart and Nano Materials, 5 (2014) 133-151.
[33] Xiang,D.X.,
Chen,Q.,
Pang,L.,
Zheng,C.L.,
Inhibitory effects of silver nanoparticles on H1N1
influenza A virus in vitro, Journal of Virological

sampling of airborne viruses, Microbiology and


Molecular Biology Reviews, 72 (2008) 413-444.
[6] Zalini,Y.: Combating and reducing the risk of
biological threat', Jounal of Defence and Security, 1
(2010) 1-15.
[7] Li,Y., Leung,G., Tang,J.W., Yang,X., Chao,C.Y.,
Lin,J.Z., Lu,J.W., Nielsen,P.V., Niu,J., Qian,H.,
Sleigh,A.C., Su,H.J, Sundell,J., Wong,T.W.,
Yuen,P.L.: Role of ventilation in airborne
transmission of infectious agents in the built
environment: a multidisciplinary systematic review,
Indoor Air, 17 (2007) 2-18.
[8] Edwards,D.A., Man,J.C., Brand,P., Katstra,J.P., Sommerer,K., Stone,H.A., Nardell,E., Scheuch,G.: Inhaling to mitigate exhaled bioaerosols, Proceedings of the National Academy of Sciences USA, 101 (2004) 17383-17388.
[9] Tyrrell,D.A.J.: How do viruses invade mucous
surfaces, Philosophical Transactions of The Royal
Society of London B Biological Sciences, 303
(1983) 75-84.
[10] Heyder,J., Gebhart,J., Rudolf,G., Schiller,C.F., Stahlhofen,W.: Deposition of particles in the human respiratory tract in the size range of 0.005-15 µm, Journal of Aerosol Science, 17 (1986) 811-825.
[11] World Health Organization. WHO global influenza
preparedness plan: the role of WHO and
recommendations for national measures before and
during pandemics. Annex 1. Recommendations for
nonpharmaceutical public health interventions,
(2005).
[12] World Health Organization. Public health response to
biological and chemical weapons - WHO guidance.
Annex 3. Biological Agents, (2004).
[13] McNett,,E.H.: The face mask in tuberculosis. How
the cheesecloth face mask has been developed as a
protective agent in tuberculosis, American Journal of
Nursing, 49 (1949) 32-36.
[14] Quesnel,L.B.: The efficiency of surgical masks of
varying design and composition, British Journal of
Surgery, 62 (1975) 936-940.
[15] Center for Disease Control and Prevention. Interim
Recommendations for Facemask and Respirator Use
to Reduce Novel Influenza A (H1N1) Virus
Transmission, (2009).
[16] Kowalski,W.J., Bahnfleth,W.: Airborne respiratory
diseases and mechanical systems for control of
microbes, HPAC Engineering, 70 (1998) 34-48.
[17] World Health Organization. Advice on the use of
masks in the community setting in Influenza A
(H1N1) outbreaks, (2009).
[18] Bell,D., Nicoll,A., Fukuda,K., Horby,P., Monto,A.,
Nonpharmaceutical Interventions for Pandemic
Influenza, National and Community Measures,
World Health Organization Writing Group'',
Emerging Infectious Diseases, 12 (2006) 88-94.
[19] National Institute for Occupational Safety and
Health. NIOSH respirator selection logic. Cincinnati,
OH: U.S. Department of Health and Human

Methods,178 (2011) 137-142.


[34] Takahashi,Y., Tatsuma,T.: Electrodeposition of
thermally stable gold and silver nanoparticle
ensembles through a thin alumina nanomask,
Nanoscale, 2 (2010) 1494-1499.
[35] Miakiewicz-Peska,E, ebkowska,M.: Effect of
antimicrobial air filter treatment on bacterial
survival, Fibres & Textiles in Eastern Europe, 19
(2011) 73-77.
[36] Salvatorelli,G., Lorenzi,S., Finzi,G., Romanini,L.:
Evaluation of a new devices against bacterial
penetration, International Journal of Disaster
Medicine, 4 (2006) 103-109.
[37] Turgeon,N., Toulouse,M-J., Martel,B., Moineau,S.,
Duchaine,C.: Comparison of Five Bacteriophages as


Models for Viral Aerosol Studies, Applied and Environmental Microbiology, 80 (2014) 4242-4250.
[38] McCullough,N.V.,
Brosseau,L.M.,
Vesley,D.:
Collection of three bacterial aerosols by respirator
and surgical mask filters under varying conditions of
flow and relative humidity, The Annals of
Occupational Hygiene, 41 (1997) 677-690.
[39] Wake,D.,
Bowry,A.,
Crook,B.,
Brown,R.:
Performance of respirator filters and surgical masks
against bacterial aerosols, Journal of Aerosol
Science, 28 (1997) 1311-1329.
[40] Ivankovic,N., Rajic,D., Ivankovic,N., Senic,Z.,
Djurovic,B., Vukovic,N., Karkalic,R.: Efficacy of
respiratory antimicrobial protection devices,
Scientific Technical Review, 64 (2014) 47-53.


LASERS POSSIBILITIES IN BRASS SURFACE CLEANING


BOJANA RADOJKOVIĆ
Institute Goša, Belgrade, Serbia, bojana.radojkovic@institutgosa.rs
SLAVICA RISTIĆ
Institute Goša, Belgrade, Serbia, slavica.ristic@institutgosa.rs
SUZANA POLIĆ
Central Institute for Conservation, Belgrade, Serbia, suzanapolicradovanovic@gmail.com
BORE JEGDIĆ
IHTM, Belgrade, Serbia, borejegdic@yahoo.com
ALEKSANDAR KRMPOT
Institute of Physics, Belgrade, Serbia, aleksandar.krmpot@ipb.ac.rs
BRANISLAV SALATIĆ
Institute of Physics, Belgrade, Serbia, branislav.salatic@ipb.ac.rs
FILIP VUČETIĆ
Faculty of Mechanical Engineering, University of Belgrade, vucetic_filip90@yahoo.com

Abstract: A large number of metals and alloys are widely used in the defense industry, not only in weaponry and defense equipment, but also in communications equipment and infrastructure. These materials are often exposed to extreme conditions: high or low temperatures, humidity, chemical agents, wear and so on. They have to be able to withstand corrosion, temperature extremes and wear. Different types of weapons and equipment require adequate maintenance and cleaning. As an efficient cleaning method, lasers are widely used in many areas for different types of materials. This paper presents the results of laser cleaning of a tarnished brass plate. Nd:YAG and Er:Glass lasers were used to clean the tarnish layer from the surface. The effects in the laser-irradiated zones were investigated by optical and SEM microscopy and EDX analysis. The surface profile roughness and surface hardness were also measured. Some parameters for successful and safe cleaning of the brass surface without degrading the surrounding material were determined.
Keywords: brass, corrosion, laser cleaning, SEM-EDX, hardness.
It is also widely used for missile components, aircraft
turnbuckle barrels and for indoor and outdoor decorative
applications including screens, elevators, signs, frames and
decorative fascia [5]. C86500 High-Strength Yellow Brass
(about 55-60% copper and about 42% zinc) has a good
hardness, high corrosion and wear resistance, and it is used
in aerospace-related products in heavy duty high wear
machine components [7].

1. INTRODUCTION
Copper and its alloys are widely used in the defense and military industry. They have good corrosion resistance, superior electrical and thermal conductivity, ease of fabrication and joining, a wide range of adequate mechanical properties and resistance to biofouling [1-6].
Different brass types are developed for application in
defense industry. Properties of the brass can be changed by
varying the proportions of copper and zinc, thus allowing
hard and soft brasses: C26000 brass, called cartridge-grade
brass, generally is composed of 70% copper and 30% zinc
[1,3]. It is used in ammunition cartridge cases, mechanical
housings for lighters, shells - mechanical housings for
ammunition, etc. [4]. C46400 Naval Brass is copper alloyed with zinc and tin to provide improved strength, corrosion resistance and machinability. It has extensive marine application because of its good strength and corrosion resistance. Typical industrial applications of C464 brass include deck hardware, lock pins, pump and valve stem nuts, heat exchanger tubes, and propeller shafts.

The corrosion resistance of brasses in aqueous solutions does not change markedly as long as the zinc content is up to 15 %; a lower zinc content leads to an increase in the corrosion resistance of brass [2]. When a brass object is exposed to ambient air, it naturally tarnishes over time, since an oxide layer a few nanometers thick forms on its surface [8]. This layer is a mixture of Cu and Zn oxides (ZnO and Cu2O). Other contaminants, caused by different handling operations and storage conditions, can also be observed on the surface.
Cleaning, de-coating and surface preparation are important
steps in material fabrication processing. Also, cleaning is

important in objects maintenance and prevention of their


deterioration. Different types of weapons and equipment
require adequate maintenance and cleaning since these
materials are often exposed to high or low temperatures,
humidity, chemical agents, wear and so on, and they have to
have the ability to withstand corrosion, temperature
extremes and wear.

energy dispersive X-ray spectrometer (EDX), INSA350, and was used for analysis of the chemical composition of the sample and for determination of the changes in the material composition of the irradiated zones.

Laser technology provides a safe, environmentally friendly and effective way to improve the cleaning processes of a wide range of materials. It is continuously being developed for the cleaning of stone [9], ceramics [10-12], glass [13,14], metal objects [12,15-17], textiles [16,18], etc., operated robotically, automated, in-line or hand-held. Mateo et al. [8] are among the few who have examined laser cleaning of brass surfaces. They used an Nd:YAG laser with a 532 nm wavelength, and their results demonstrate the successful removal of different types of coatings from brass samples.

Picture 1. Brass plate with treated zones


Table 1. Experimental parameters of the analysed zones

laser type   λ [nm]   zone     E [mJ]   number of pulses
Er:Glass     1540     zone 1   8,1      10
Er:Glass     1540     zone 3   8,1      5
Er:Glass     1540     zone 5   8,1      1
Nd:YAG       532      zone 1   10,2     10
Nd:YAG       532      zone 2   10,2     5
Nd:YAG       532      zone 3   10,2     1
Nd:YAG       1064     zone 1   10       10
Nd:YAG       1064     zone 7   2        10

Properly adapted laser technology can remove contaminants, production residue and coatings without damaging the substrate. For satisfactory results it is necessary to select an adequate laser type and to optimize its parameters for the specific material. The capability of laser ablation to perform safe cleaning of a tarnished brass sample is demonstrated in this work. Nd:YAG and Er:Glass lasers were used in the analysis. Within the accessible laser wavelengths, laser parameters such as the number of pulses and the energy of the laser beam were varied in order to define the optimal parameters for successful and safe cleaning of the brass surface without degrading the surrounding material. Optical and SEM microscopy were used for morphological analysis before and after the laser treatment, while EDX analysis was used for chemical analysis. The surface profile roughness and surface hardness were analyzed with a profilometer and a microhardness tester.

The profilometry method was used for determination of the geometric parameters of the irradiated surface, with a TR200 Portable Surface Roughness Tester. Microhardness measurement of the material surface was performed with a Micro Vickers Hardness Tester TH710.

2. EXPERIMENT
A naturally tarnished brass plate (picture 1) was treated with noncommercial Er:Glass and Nd:YAG lasers experimentally developed in the Laboratory of the Center for Photonics, Institute of Physics, Belgrade. The Er:Glass laser wavelength is λ = 1540 nm, generated in the TEM01 mode. The energy of the laser beam can be varied up to a maximal value of E = 8 mJ. The pulse duration is 50 ns. The Nd:YAG laser can operate at two wavelengths, 1064 nm and 532 nm, in the TEM00 mode. The maximal laser beam energy is 10 mJ for the wavelength of 1064 nm and 10,2 mJ for the wavelength of 532 nm. In both cases the laser pulse duration is 80 ns. The experimental parameters for the eight zones irradiated by the Er:Glass and Nd:YAG lasers are presented in Table 1.
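Since the cleaning effect is usually discussed in terms of fluence rather than pulse energy (fluences below 0.5 J/cm2 are quoted in the conclusion of this paper), it may help to relate the energies in Table 1 to an average fluence over the irradiated spot. The sketch below is only illustrative; the 1 mm spot radius is an assumption, as the beam diameters are not given here.

# Illustrative fluence estimate from pulse energy and spot size.
# The 1 mm spot radius is an assumed value, not taken from the experiment.
import math

def fluence_j_per_cm2(pulse_energy_mj, spot_radius_mm):
    """Average fluence F = E / (pi * r^2) in J/cm2."""
    energy_j = pulse_energy_mj * 1e-3
    area_cm2 = math.pi * (spot_radius_mm * 0.1) ** 2   # mm -> cm
    return energy_j / area_cm2

# Example: 8.1 mJ (Er:Glass) and 10.2 mJ (Nd:YAG, 532 nm) over an assumed 1 mm radius
for e_mj in (8.1, 10.2):
    print(f"{e_mj} mJ -> {fluence_j_per_cm2(e_mj, 1.0):.2f} J/cm2")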

3. RESULTS AND DISCUSSION


The laser beam can interact with a solid surface producing various effects, such as ablation of corrosion and dirt on the sample. Surface cleaning based on laser ablation is a delicate and irreversible process, accompanied by many potential complications. It is very important to choose the most suitable laser cleaning methodology and laser parameters in accordance with the material properties. Laser ablation is a process consisting of optical, photothermal, photoacoustic and photomechanical phenomena, in which the absorption of a large number of photons heats the material and performs the surface modification. The exact nature of the material ablation by laser irradiation depends on the material used and the laser processing parameters. When the laser fluence is low, the photothermal ablation mechanisms include material evaporation and sublimation. At higher fluence, heterogeneous nucleation of vapour bubbles leads to normal boiling.

In the experiment with the Er:Glass laser, the impact of the laser radiation on the brass surface was examined as a function of the number of pulses. The laser beam energy was kept constant at 8,1 mJ. A similar experiment was conducted with the Nd:YAG laser at the 532 nm wavelength, where the laser beam energy was 10,2 mJ. A constant number of pulses with varying laser beam energy was used in the experiment with the Nd:YAG laser at the wavelength of 1064 nm.

In all cases, a highly directed plume ejected from the irradiated zone is present and leads to material removal. The dense vapour plume may contain solid and liquid clusters of material. Resolidification of the expelled liquid and condensation of the plume material into thin films can alter the topography of the areas surrounding the ablated region [19].

The impact of the laser radiation on the morphology of the sample surface was monitored by an optical microscope (OM), Olympus CX41, and a scanning electron microscope (SEM), JEOL JSM-6610LB. The SEM is linked to the

Picture 2. OM (100x, 200x) and SEM (250x, 700x) images of the treated zones: a) Er:Glass, zone 1; b) Er:Glass, zone 3; c) Er:Glass, zone 5; d) Nd:YAG (532 nm), zone 1; e) Nd:YAG (532 nm), zone 2; f) Nd:YAG (532 nm), zone 3; g) Nd:YAG (1064 nm), zone 1; i) Nd:YAG (1064 nm), zone 7

Besides ablation, some other processes can occur on the surface of the material. When the fluence of the laser irradiation is below the melting threshold, a variety of temperature-dependent processes within the solid material can be activated: impurity doping, reorganization of the crystal structure, and sintering of porous materials. Also, the large temperature gradient in a narrow space can lead to rapid self-quenching of the material, forming highly non-equilibrium structures.
The kinetics of chemical reactions can be accelerated, and rapid transformations to high-temperature crystal phases can occur.
Also, rapid and large temperature gradients can induce thermal stresses and thermoelastic excitation of acoustic waves and, consequently, can lead to work hardening, warping, cracking and similar mechanical responses.

When the fluence of the laser irradiation is above the melting threshold, transient pools of molten material can be formed on the surface. Depending on the recrystallization dynamics, defects and supersaturated solutes, as well as metastable material phases, can be formed when the resolidification rate is high. In the case of a slow resolidification rate, recrystallization into grains larger than in the original material can take place [19].

The analyses of the cleaned zones on brass (picture 2) show that the morphological modifications depend mostly on the number of pulses and the laser fluence, and less on the laser wavelength.
3.1. OM and SEM microscopy
OM and SEM images of the zones treated by the Er:Glass laser (picture 2 a), b) and c)) show that a single laser pulse of 8 mJ is sufficient to start cleaning of the corrosion layer. A larger number of pulses leads to melting of the base material. The surface of zone 1 (picture 2 a)) shows craters and solidified metal splashes carried away from the central zone. Better results are obtained when a laser beam of 4 mJ energy with 5 pulses is used. Energy above 4 mJ is above the melting threshold and can lead to the formation of molten material on the surface (zone 1, picture 2 a)).

Picture 3. Er:Glass laser treated zone 1: a) SEM images and b), c) characteristic EDX spectra
The EDX analyses were also performed in the zones cleaned by the Nd:YAG laser. The results clearly indicate changes in the concentration of almost all elements depending on the location of the measurement point (spectra 1, 2 and 3 in different areas of irradiation). The spectra show that, in the periphery of the cleaned zones and in the non-cleaned parts of the surface, elements such as Si, S, O, Fe, P and Al are present (picture 4 d)).

Zones 1-3 (picture 2 d), e), f)), where the Nd:YAG laser with a wavelength of 532 nm was used, and zones 1 and 7 (picture 2 g) and i)), where the 1064 nm wavelength was used for cleaning, show no melted metal. It can be concluded that the absorption of the beam with a wavelength of λ = 1540 nm is better than the absorption of the Nd:YAG wavelengths used.
It is clear from the images that the ablated diameter increases with increasing shot number, directly indicating an accumulation effect.

3.2. EDX analysis

The EDX analysis was used to determine the qualitative and quantitative elemental differences between the cleaned and non-cleaned parts of the surface. The locations where the spectra were taken in the Er:Glass laser treated zone 1 are shown in picture 3 a). EDX spectrum 1 (from the central part of zone 1) shows only Cu and Zn, while spectrum 2 additionally shows P, S, O and Na.

Picture 5. Nd:YAG (1064 nm) laser treated zone 3: a) SEM images and b), c) and d) characteristic EDX spectra

3.3. Profilometry and microhardness


The roughness can be characterized by several parameters
and functions (such as height parameters, wavelength
parameters and spacing and hybrid parameters). The Mean
Roughness (Roughness Average Ra) is the arithmetic
average of the absolute values of the roughness profile
ordinates.
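The arithmetic-average definition of Ra quoted above reduces to a one-line computation over the measured profile ordinates after subtraction of the mean line. The following sketch is only a numerical illustration with made-up ordinates, not data from the TR200 measurements.

# Numerical illustration of Ra as the arithmetic average of the absolute
# deviations of the profile ordinates from the mean line (made-up data).

def mean_roughness_ra(profile_um):
    """Ra = (1/n) * sum(|z_i - mean(z)|), in the same units as the profile."""
    n = len(profile_um)
    mean_line = sum(profile_um) / n
    return sum(abs(z - mean_line) for z in profile_um) / n

# Hypothetical profile ordinates in micrometres
profile = [1.2, -0.8, 2.4, -1.5, 0.3, -2.1, 1.8, -0.6]
print(f"Ra = {mean_roughness_ra(profile):.3f} um")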

The results of the profilometry test over the zones irradiated by the Nd:YAG (1064 nm) laser are presented in picture 6. The Ra value measured over the array of zones irradiated with the Nd:YAG (1064 nm) laser is 2,491 µm. The depth of the cleaned zones varies between 2.4 and 10 µm. Very similar results were recorded for the zones treated by the Nd:YAG (532 nm) laser.

Picture 4. Nd:YAG (532 nm) laser treated zone 3: a)
SEM images and b), c) and d) characteristic EDX spectra

The variations in depth for the zones cleaned by the Er:Glass laser are higher than for the Nd:YAG laser.

Spectrum 2, picture 5 c), is recorded in a non-cleaned zone and contains the elements K, Na, Ca and Mg, which exist in the surface layer of the sample.
Spectrum 1, picture 5 b), recorded in the centre of the irradiated zone, confirms that no elements present in the contaminants remain, so the undesired layer has been removed.

Picture 6. Profilometry over the zones treated by the Nd:YAG (1064 nm) laser

Microhardness testing of the irradiated material surface provides valuable information on important material properties: resistance to deformation, friction and abrasion. Controlling these properties can contribute to the successful application of materials by helping to prevent wear and premature product failure.

Joseph et al. [20] investigated the cause and mechanism of failure of an aircraft component.
Hardness tests have been found to be very useful for materials evaluation, quality control of manufacturing processes, and research and development efforts.
Hardness refers to the ability of a material to resist abrasion, penetration, cutting action or permanent distortion. The hardness of a metal is directly related to its machinability,


since toughness decreases as hardness increases.

[8] Mateo, M. P., Ctvrtnickova, T., Fernandez, E., Ramos, J. A., Yanez, A., Nicolas, G., Laser cleaning of varnishes and contaminants on brass, Applied Surface Science, 255 (2009) 5579-5583.
[9] Siano, S., Giamello, M., Bartoli, L., Mencaglia, A., Parfenov, V., Salimbeni, R., Laser Cleaning of Stone by Different Laser Pulse Duration and Wavelength, Laser Physics, 18(1) (2008), 27-36.
[10] Casasola, R., Rincón, J. M., Romero, M., Glass-ceramic glazes for ceramic tiles: a review, Journal of Materials Science, 47 (2012) 553-582.
[11] Sola, D., Escartín, A., Cases, R., Pena, J.I., Laser ablation of advanced ceramics and glass-ceramic materials: Reference position dependence, Applied Surface Science, 257 (2011) 5413-5419.
[12] Ristic, S., Polic-Radovanovic, S., Katavic, B., Kutin, M., Nikolic, Z., Puharic, M., Ruby laser beam interaction with ceramic and copper artifacts, Journal of Russian Laser Research, 31(4) (2010), 380-389.
[13] Weng, T.-S., Tsai, C.-H., Laser-induced backside wet cleaning technique for glass substrates, Applied Physics A, 116(2) (2014), 597-604.
[14] Radojković, B., Ristić, S., Polić-Radovanović, S., Study of ruby laser beam interaction with glass, FME Transactions, 41 (2013), 109-113.
[15] Ali, S. N., Taha, Z. A., Mansour, T. S., Laser Cleaning
Using Q-Switched Nd:YAG Laser of Low Carbon
Steel Alloys, Advances in Condensed Matter Physics,
2014 (2014), Article ID 418142, 6 pages.
[16] Risti, S., Poli, S., Radojkovi, B., Laser cleaning of
textile artifacts with corroded metal threads, 6th
International Scientific Conference On Defensive
Technologies OTEH 2014, Vojno-tehniki institut,
Beograd 9-10. oktobar 2014., 649-655.
[17] Bergström, D., The Absorption of Laser Light by Rough Metal Surfaces, Doctoral Thesis, Department of Engineering, Physics and Mathematics, Mid Sweden University, Östersund, Sweden, 2008.
[18] Ferrero, F. and Testore, F., Surface degradation of linen textiles induced by laser treatment: comparison with electron beam and heat source, Autex Research Journal, 2 (2002), 109-114.
[19] Brown, M.S. and Arnold, C. B., Fundamentals of
Laser-Material Interaction and Application to
Multiscale Surface Modification, Chapter in Laser
Precision Microfabrication, of the series Springer
Series in Materials Science, 135 (2010), 91-120.
[20] Joseph,O., Ajayi,J., Ajayi,OO., Fayomi,S., Joseph,O.,
Gbenebor,O., Material performance investigation on
the failure of an aircraft (ABT-18) Nose Wheel Strut.,
International Journal of Industrial Engineering &
Technology (IJIET), 2 (3) (2012), 1-6.

Microhardness measurement of the investigated brass plate surface, performed with the Micro Vickers Hardness Tester TH710, did not give useful results, because the irradiated zones were smaller in diameter than the probe.

5. CONCLUSION
The main aim of this research was the investigation of the morphological and chemical changes on the surface of a brass sample cleaned with three laser wavelengths.
Er:Glass and Nd:YAG laser irradiation with energies below 10 mJ (and fluences below 0.5 J/cm2) is suitable for brass surface cleaning. Cleaning with higher-energy laser beams leads to melting of the metal and is not desirable for cleaning the corrosion layers.
During laser cleaning, different processes occur, some of which are useful and others harmful to the base material.
The obtained results show that the interaction of the laser with materials is a complex process which depends on many parameters related to the laser and the sample. A proper and safe application of lasers for cleaning brass surfaces requires comprehensive analyses of the topographic and chemical modifications of the material after the laser treatments.

ACKNOWLEDGMENTS
The authors thank the Ministry of Culture of the Republic of
Serbia for its financial support under Project 1/7414.7.2011. This research was also financially supported by
the Ministry of Education, Science and Technological
Development of Serbia under Project TR 34028.

References
[1] Walker, R. E., Cartridges and Firearm Identification,
CRC Press, Taylor & Francis Group, Boca Raton, FL,
2013.
[2] http://www.totalmateria.com/Article16.htm, July 27th
2016.
[3] http://www.farmerscopper.com/cartridge-brass.html,
July 27th 2016
[4] http://www.dura-barms.com/bronze/brass/c26000.cfm,
July 27th 2016
http://busbymetals.com/products/brass-and-navalbrass/c46400/, July 27th 2016
[5] http://www.concast.com/files/aerospace_brochure.pdf,
July 27th 2016
[6] Anyadike, N., Copper: A material for the new
millennium, Woodhead Publishing Ltd, Cambridge,
England, 2002.
[7] Qiu, P., Leygraf, C., Initial oxidation of brass induced
by humidified air, Applied Surface Science, 258
(2011) 12351241.


EFFECT OF IF-WS2 NANOPARTICLES ADDITION ON PHYSICAL-MECHANICAL AND RHEOLOGICAL PROPERTIES AND ON CHEMICAL RESISTANCE OF POLYURETHANE PAINT
DRAGANA S. LAZIĆ
Military Technical Institute, Belgrade, lazicdragana85@gmail.com
DANICA M. SIMIĆ
Military Technical Institute, Belgrade, simic_danica@yahoo.com
ALEKSANDRA D. SAMOLOV
Military Technical Institute, Belgrade, aleksandrasamolov@gmail.com

Abstract: In this paper, the possibility of improving the properties of protective coatings by adding IF-WS2 nanoparticles is examined. The nanoparticles were added to a standard polyurethane paint for military camouflage protection, in a concentration of 1 wt.%, and dispersed in the paint by ultrasonic irradiation. After homogenization on a magnetic stirrer, the paints were applied to standard steel plates and dried, for examination of the physical-mechanical properties and the chemical resistance. The following physical-mechanical properties were compared for the paint without and with IF-WS2 nanoparticles: hardness, flexibility, elasticity, abrasion resistance and adhesion. Camouflage properties were also examined - IR reflection and colorimetry. In addition, resistance to salt water and aggressive media was observed. The effect of adding IF-WS2 on the rheological properties of the two examined paints was examined using Dynamic Mechanical Analysis (DMA), observing the viscosity as a function of the shear rate.
Keywords: polymeric coating, polyurethane paint, tungsten disulfide, nanoparticles.

absorbance, reflectance or other properties [12], whether these dyes are used for colouring metals or textiles [13]. These measurements can also give us valuable information regarding the colour coordinates of dyes, which are their main characteristics, especially when it comes to military camouflage paint.

1. INTRODUCTION
Transition-metal dichalcogenides (MoS2, WS2, NbS2, etc.), due to their excellent mechanical properties, are used in a wide range of applications, including aerospace and automotive technology, load bearing and release mechanisms, solid lubricants, corrosion protection, etc. [1-8]. In the form of inorganic fullerene-like particles, with a unique morphology and a spherical, closed structure, they possess chemical inertness and high elasticity. Due to these exceptional characteristics, inorganic fullerene-like particles such as tungsten disulfide IF-WS2 are recognized as promising materials and promising fillers for composites, and are extensively studied for their ability to control wetting, adhesion and lubrication on surfaces and interfaces, and to achieve good corrosion protection and wear resistance; thus, they may be used as an addition to different types of protective coatings [9-11].

2. MATERIALS AND EXPERIMENTAL PROCEDURE
2.1. Preparation of polyurethane paint samples
Inorganic fullerene-like tungsten disulphide nanoparticles (NanoLub) were added into the standard polyurethane black coating for military camouflage protection (supplied by the company PITURA d.o.o.) in a concentration of 1 wt.%. Dispersion and particle deagglomeration were performed in an ultrasonic bath for 45 min at room temperature. After homogenization on a magnetic stirrer for 30 min, the paints were applied directly, using an applicator made in the Military Technical Institute, onto previously prepared steel plates in a layer thickness of approximately 50 µm, and left to dry (Picture 1). The number and size of the steel plates were chosen according to the relevant military standards for the examination of military camouflage paints.

In this paper, the possibility of using IF-WS2 nanoparticles as a filler in a standard black polyurethane coating for military camouflage protection (PUR black) is examined, and the properties of the paint with and without nanoparticles are compared and analyzed with regard to the requirements of the military standards concerning paint coatings.
Spectrophotometric measurements are widely used in the characterization of dyes, whether their absorbance, reflectance or other properties are measured [12], and whether the dyes are used for colouring metals or textiles [13]. These measurements can also give valuable information regarding the colour coordinates of dyes, which are their main characteristics, especially when it comes to military camouflage paint.


Picture 1. Prepared samples of the black camouflage paint

2.2. Physical-mechanical properties examinations

The examined paint, with and without IF-WS2 nanoparticles, was tested according to the military standards SORS 1564 [14] and SORS 1634 [15]. The following properties were determined and analyzed:
- adhesion, using an Elcometer device, according to SRPS EN ISO 2409 [16],
- elasticity (cupping test), using an Erichsen 202 device, according to SRPS EN ISO 1520 [17],
- hardness, according to König, using an Erichsen device,
- resistance to the impact of steel balls, using an Erichsen 273 D device,
- flexibility (bend test, cylindrical mandrel), using an Erichsen 266 device, according to SRPS EN ISO 1519 [18],
- abrasion resistance, using an Erichsen 251/I device.

2.3. Resistance to aggressive media

The persistence of the examined polyurethane camouflage paint in different aggressive media was tested. The steel plates with the paint applied and dried were immersed in:
- water,
- water solution of sodium chloride (3.5 wt.%),
- a mixture of hydrocarbons (80 vol.% of isooctane and 20 vol.% of benzene),
- mineral oil.
After the required time in these media, the steel plates were taken out, dried and observed in order to compare the appearance of the coatings after the exposure to the aggressive solutions.

2.4. Dynamic mechanical analysis - rheological properties analysis

Dynamic mechanical analysis (DMA) of the investigated samples was performed using the Modular Compact Rheometer MCR-302 (Anton Paar GmbH) equipped with plate-plate fixtures for testing liquid samples (Picture 2, [19]). The DMA tests, carried out in order to evaluate the rheological properties of the tested paints, included measurement of the viscosity as a function of the shear rate. The tests were performed at a frequency of 1 Hz at ambient temperature, which was approximately 20 °C.

Picture 2. The Modular Compact Rheometer MCR-302

2.5. Camouflage properties - IR reflection and colorimetry

The spectrophotometric measurements were carried out in the Military Technical Institute in Belgrade, Department of Materials and Protection. The measurements were conducted using the UV/VIS/NIR spectrophotometer UV 3600 (Shimadzu, Japan) with an integrating sphere [20]. Reflectance was measured in the visible and near-infrared part of the electromagnetic spectrum (650-1000 nm) using the UV Probe programme package, while the colour coordinates were measured in the visible part of the spectrum (380-780 nm) using the Colour programme package, with the 10° standard observer and the D65 illuminant [20].

3. RESULTS AND DISCUSSION

3.1. Physical-mechanical properties of the examined paints

The results of the comparative examinations of PUR black with and without IF-WS2 according to the military standards are presented in Table 1.

Table 1. Results of physical-mechanical examinations

Characteristic | Examination method standard (SORS 1634 / SORS 1564) | Required quality | PUR black | PUR black + 1 wt.% IF-WS2

Technological-application characteristics:
Appearance of the paint before application | 6.1. / 4.1. | No gelatinization, nor formation of surface scum, clots or blobs | + | +
Behaviour during dissolving and application | 6.8. / 4.2. | Homogeneous coating, uniform thickness | + | +

Physical-mechanical characteristics:
Adhesion (cross-cut test), 1 mm | 6.14. / 4.5. | from 0 (bad) to 4 (good) | + | +
Hardness, s (König pendulum) | 6.19. / 4.7. | > 80 | 178 | 202
Flexibility, 6 mm | 6.24. / 4.11. | No cracks | + | +
Elasticity, mm | 6.23. / 4.10. | 6 | + | +
Resistance to the impact of steel balls | 6.20. / 4.8. | Needs to resist 4500 steel balls | + (5000) | + (9500)
Abrasion resistance, L/m | 6.22. / 4.9. | 0.1 multiplied by the thickness of the coating sample | 1 L* | 1 L*

(+) meets the required criteria
(-) does not meet the required criteria
* Both samples failed this test, but the result after the same amount of sand is compared.

It may be observed that both paints, with and without nanoparticles, have good technological-application characteristics. Regarding the physical-mechanical resistance, the sample containing the nanofiller shows improved behaviour. Adhesion, elasticity and flexibility of both samples satisfy the requirements of the standards. Hardness is significantly higher for the sample containing IF-WS2. Resistance to the impact of steel balls was far better for the sample with nanoparticles: according to the standard, the coating applied on a steel plate needs to resist the impact of 4500 steel balls and, as shown in the table, the sample with nanoparticles resisted the impact of 9500 balls, nearly twice as many as the sample without nanoparticles. Abrasion resistance is tested using sand of a defined granulation falling from a defined height; the amount of sand for each coating sample is defined with regard to the thickness of the tested coating. In this research both samples failed this test, but for comparison the appearance of the samples after the same amount of sand was taken. It has been shown that the sample with nanoparticles has better abrasion resistance, as Picture 3 illustrates. The fact that both tested samples failed this test may be a consequence of the application technique used in this experiment.

Picture 3. Abrasion resistance test: a) PUR black, b) PUR black + 1 wt.% IF-WS2

3.2. Resistance to aggressive media

The results of these examinations are given in Table 2. Both tested samples have met the criteria of the tests: the coatings are persistent in all the aggressive media used.

Table 2. Persistence of coatings in aggressive media

Aggressive medium | Methodology | Examined property | Criteria | PUR black | PUR black + 1 wt.% IF-WS2
water | immersion, t = 23±2 °C, 168 h | appearance, adhesion | no changes in appearance and adhesion | + | +
water solution of sodium chloride (3.5 wt.%) | immersion, t = 23±2 °C, 120 h | appearance, adhesion | no changes in appearance and adhesion | + | +
mixture of hydrocarbons, isooctane/benzene (80/20 vol.%), p.a. | immersion, t = 23±2 °C, 24 h | appearance, adhesion | no changes in appearance and adhesion | + | +
mineral oil | immersion, t = 105 °C, 96 h | appearance, adhesion | no changes in appearance and adhesion | + | +
(+) meets the required criteria
(-) does not meet the required criteria



3.3. DMA results - rheological properties of the examined paints

The results of the DMA tests - curves of viscosity as a function of shear rate - are presented in Picture 4.

Picture 4. Viscosity - shear rate dependence

The results indicate that the addition of IF-WS2 nanoparticles has an impact on the paint's rheology - the viscosity of the paint increases: the paint without nanoparticles has a viscosity of η = 29.62 mPa·s, while the paint containing 1 wt.% of IF-WS2 has a viscosity of η = 71.77 mPa·s.

3.4. IR reflection and colorimetric results

Picture 5 shows the dependence of the diffuse reflection on the wavelength in the 650-1000 nm range for the black tone, with and without nanoparticles.

Picture 5. Diffuse reflection in the range 650-1000 nm for the black tone (dye with and without nanoparticles)

According to the Serbian military standard for the values of diffuse reflection in the visible and near-infrared region for the black tone, SORS 7511/11 [21], both dyes have good and acceptable camouflage behaviour in the given part of the electromagnetic spectrum. However, the greater the reflectance value, the better the camouflage protection, so in these terms we would suggest the use of the dye with nanoparticles. As seen in Picture 5, there is a difference in the behaviour of the reflectance curves in the 800-950 nm wavelength area. This finding demands further physical and chemical analysis. Nevertheless, the dye to which nanoparticles are added shows more uniformity in the given part of the electromagnetic spectrum, which is another reason to believe that its usage in camouflage protection could be beneficial.

In these terms we performed another test and measured the colour coordinates. The results are presented in Table 3.

Table 3. Colour coordinates defined in the CIE system for both samples

Colour coordinate | With nanoparticles | Without nanoparticles
L* | 27.09 | 26.28
a* | 0.48 | 0.4
b* | 0.11 | 0.1

The main characteristic these data provide is the so-called ΔE colour difference, which is calculated with equation (1):

ΔE = √((ΔL*)² + (Δa*)² + (Δb*)²)    (1)

If ΔE is less than 1, the difference between two dyes is not visible to the observer's eye. In our case ΔE = 0.81.
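The reported value can be reproduced directly from the coordinates in Table 3. The following short sketch is only an illustrative numerical check of equation (1); it is not part of the measurement procedure described in the paper.

import math

# CIE L*a*b* coordinates taken from Table 3
with_nano = (27.09, 0.48, 0.11)     # L*, a*, b* - paint with 1 wt.% IF-WS2
without_nano = (26.28, 0.40, 0.10)  # L*, a*, b* - paint without nanoparticles

# Equation (1): Euclidean colour difference in CIELAB space
delta_E = math.sqrt(sum((x - y) ** 2 for x, y in zip(with_nano, without_nano)))
print(f"dE = {delta_E:.2f}")  # ~0.81, below the visibility threshold of 1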

To sum up all the spectrophotometric data presented here, the dye with the nanoparticle addition has better qualities and the difference between the two samples is not visible to the observer's eye. Therefore, these results give us a reason to proceed with the examination and characterization of nanoparticle dyes.

4. CONCLUSION

Physical-mechanical and rheological properties of a polymeric coating used for military camouflage purposes were enhanced by adding IF-WS2 nanoparticles in a small concentration (1 wt.%).
It is shown that the mechanical properties are improved:
- hardness and resistance to the impact of steel balls were significantly improved for the sample with nanoparticles compared to the sample without them,
- adhesion, elasticity and flexibility of both samples satisfied the requirements of the relevant military standards,
- abrasion resistance of both samples failed the test; however, abrasion resistance is significantly better for the sample containing IF-WS2 nanoparticles.
Both tested samples have met the criteria of the immersion tests: the coatings are persistent in all the aggressive media used: water, salt water, mineral oil and a mixture of isooctane and benzene.
Rheological behaviour of the paints with the nanofiller added has shown that the viscosity increases with shear rate for the sample with nanoparticles.
Spectrophotometric examinations indicate that the dye with the nanoparticle addition has better camouflage qualities and the difference between the two samples is not visible to the observer's eye. These results encourage the application of IF-WS2 as a nanofiller for this kind of military paints in order to improve their resistance and general properties, and give a reason to proceed with the examination and characterization of coatings with nanoparticles.



ACKNOWLEDGEMENT
The authors thank the Ministry of Education, Science and Technological Development of the Republic of Serbia for the financial support of this research through the project TR 34034. We also thank the company PITURA d.o.o.

References
[1] García-Lecina,E., García-Urrutia,I., Díez,J.A., Fornell,J., Pellicer,E., Sort,J.: Codeposition of inorganic fullerene-like WS2 nanoparticles in an electrodeposited nickel matrix under the influence of ultrasonic agitation, Electrochimica Acta, 114 (2013) 859-867.
[2] Shahar,C., Zbaida,D., Rapoport,L., Cohen,H., Bendikov,T., Tannous,J., Dassenoy,F., Tenne,R.: Surface Functionalization of WS2 Fullerene-like Nanoparticles, Langmuir, 26(6) (2010) 4409-4414, DOI: 10.1021/la903459t.
[3] Tenne,R., Margulis,L., Genut,M., Hodes,G.: Polyhedral and Cylindrical Structures of Tungsten Disulfide, Nature, 360 (1992) 444-446.
[4] Tevet,O.: Mechanical and tribological properties of inorganic fullerene-like (IF) nanoparticles, Weizmann Institute of Science, Rehovot, Israel, 2011.
[5] Tevet,O., Von-Huth,P., Popovitz-Biro,R., Rosentsveig,R., Wagner,H.D., Tenne,R.: Friction mechanism of individual multilayered nanoparticles, Proc. Natl. Acad. Sci. U.S.A., 108 (2011) 19901-19906.
[6] Xu,F.: Large Scale Manufacturing of IF-WS2 Nanomaterials and Their Application in Polymer Nanocomposites, University of Exeter, Devon, UK, 2013 (https://ore.exeter.ac.uk/repository/handle/10871/8986, last date of access 2nd January 2016).
[7] Li,H., Yin,Z., Jiang,D., Huo,Y., Cui,Y.: Tribological behavior of hybrid PTFE/Kevlar fabric composites with nano-Si3N4 and submicron size WS2 fillers, Tribol. Int., 80 (2014) 172-178.
[8] Zhu,Y.Q., Sekine,T., Kieren,B.S., Firth,S., Tenne,R., Rosentsveig,R., Kroto,H.W., Walton,D.R.: Shock-Wave Resistance of WS2 Nanotubes, J. Am. Chem. Soc., 125 (2003) 1329-1330.
[9] Nemeş,P.I.: Nanocomposite coatings for anticorrosion protection of some metals, PhD Thesis, Universitatea Babeş-Bolyai, Cluj-Napoca, 2013.
[10] Eidelman,O., Friedman,H., Rosentsveig,R., Moshkovith,A., Perfiliev,V., Cohen,S.R., Feldman,Y., Rapoport,L., Tenne,R.: Chromium-rich coatings with WS2 nanoparticles containing fullerene-like structure, NANO: Brief Reports and Reviews, 6(4) (2011) 313-324, DOI: 10.1142/S1793292011002755.
[11] Hu,X.G., Cai,W.J., Xu,Y.F., Wan,J.C., Sun,X.J.: Electroless Ni-P-(nano-MoS2) composite coatings and their corrosion properties, Surface Engineering, 25(5) (2009) 361-366, DOI: 10.1179/174329408X282532.
[12] De Meyer,T., Steyaert,I., Hemelsoet,K., Hoogenboom,R., Van Speybroeck,V., De Clerck,K.: Halochromic properties of sulfonphthaleine dyes in a textile environment: The influence of substituents, Dyes and Pigments, 124 (2016) 249-257.
[13] Gulmini,M., Idone,A., Diana,E., Gastaldi,D., Vaudan,D., Aceto,M.: Identification of dyestuffs in historical textiles: Strong and weak points of a non-invasive approach, Dyes and Pigments, 98 (2013) 136-145.
[14] SORS 1564, Two-component polyurethane camouflage paint (in Serbian).
[15] SORS 1634, Methods of testing paints, varnishes, thinners and related materials (in Serbian).
[16] SRPS EN ISO 2409, Paints and varnishes - Cross-cut test.
[17] SRPS EN ISO 1520, Paints and varnishes - Cupping test.
[18] SRPS EN ISO 1519, Paints and varnishes - Bend test (cylindrical mandrel).
[19] Anton Paar Germany GmbH, MCR: The Modular Compact Rheometer Series, 2016.
[20] Shimadzu, UV 3600 Tutorial, 2009.
[21] SORS 7511/11, Reflection of camouflage materials in the UV, visible and NIR regions of the EM spectrum (in Serbian).


THERMAL ANALYSIS OF NANOCRYSTALLINE NiFe2O4 PHASE FORMATION IN SOLID STATE REACTION
VLADAN ĆOSOVIĆ
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Njegoševa 12, 11000 Belgrade, Serbia, vlada@tmf.bg.ac.rs
ALEKSANDAR ĆOSOVIĆ
Institute for Technology of Nuclear and Other Mineral Raw Materials, Franše d'Eperea 86, 11000 Belgrade, Serbia, a.cosovic@itnms.ac.rs
TOMÁŠ ŽÁK
Institute of Physics of Materials AS CR, v.v.i., Žižkova 22, CZ-616 62 Brno, Czech Republic, zak@ipm.cz
NADEŽDA TALIJAN
Institute of Chemistry, Technology and Metallurgy, University of Belgrade, Njegoševa 12, 11000 Belgrade, Serbia, ntalijan@tmf.bg.ac.rs
DUŠKO MINIĆ
University of Priština, Faculty of Technical Sciences, Knjaza Miloša 7, 38220 Kosovska Mitrovica, Serbia, dminic65@open.telekom.rs
DRAGANA ŽIVKOVIĆ
Technical Faculty in Bor, University of Belgrade, Vojske Jugoslavije 12, 19210 Bor, Serbia, dzivkovic@tf.bor.ac.rs

Abstract: Formation of the nanocrystalline magnetic NiFe2O4 phase via a low-cost reaction in the solid state was studied using combined differential thermal analysis (DTA) and thermogravimetric analysis (TGA), followed by additional thermomagnetic measurements (TM). The observed DTA/TGA thermal effects and the considerable increase of the overall magnetic moment during TM were associated with the changes in phase composition and the development of the magnetic NiFe2O4 structure. The findings were additionally supported by the results of subsequent phase composition analysis using X-ray diffraction (XRD) and 57Fe Mössbauer spectroscopy (MS). The nanocrystalline structure of the obtained ferrite phase was confirmed by XRD and electron microscopy analyses. The synthesized material was further magnetically characterized by room temperature hysteresis loop measurements on a vibrating sample magnetometer (VSM). The recorded TM curves and the hysteresis loop both reveal an evident increase in the volume of the magnetic phase and thus additionally support the prior findings.
Keywords: nanocrystalline NiFe2O4, DTA/TG, thermomagnetic measurements, phase composition, magnetic properties.

1. INTRODUCTION
Extensive investigations in the field of nanotechnology in
recent years have extended already wide field of application
of Ni-ferrite to new areas such as hydrogen production, gas
sensing, catalysis and optoelectronics [1-3]. It is widely
accepted that basic functional properties of Ni-ferrites are
heavily influenced by their morphology, structure and phase
composition which in turn are determined by an applied
synthesis route [4-6]. More to the point, differences between
metal oxides prepared by grinding of solid metallic salts
with NaOH and the ones prepared from hydroxides in a
solution are reported in literature [3,7]. Conventional solid
state reactions for preparation of Ni-ferrites are
characterized by a common disadvantage that formation of a
polycrystalline ferrite phase with large crystallites is favored at higher temperatures. Moreover, studies show that the decomposition of hydroxides to oxides during a solid-state reaction is accompanied by release of a significant amount of heat. As a result, the specific structural and magnetic properties of nanocrystalline ferrites [8] may be lost through grain growth. Nonetheless, the unwanted side effects during the solid-state reaction can be to some extent diminished by the addition of different grain growth inhibitors, e.g. NaCl [3,9]. Consequently, detailed knowledge of the reaction route and phase formation is rather important.

In the present study, the formation of nanocrystalline


NiFe2O4 phase through a low-cost solid-state reaction was
investigated. The prepared nickel ferrite material was
studied and discussed in terms of its structure, composition
and magnetic properties.



2. EXPERIMENTAL
Analytical grade precursors NiSO4·6H2O, Fe(NO3)3·9H2O, NaOH and NaCl were used for the preparation of the nanocrystalline Ni-ferrite powder. A low-cost synthesis route was applied, which is essentially a modified preparation route developed by Darshane et al. [3], based on the following reactions in the solid state:

NiSO4·6H2O + 2NaOH → NiO + Na2SO4 + 7H2O
2Fe(NO3)3·9H2O + 6NaOH → Fe2O3 + 6NaNO3 + 21H2O
NiO + Fe2O3 → NiFe2O4 (at elevated temperature T)

With respect to the stoichiometry and the pseudo-binary phase diagram, the solid precursors were mixed in the molar ratio of 1:2:8:10 and ground together for 60 min. The sole role of NaCl was to inhibit grain growth during the applied synthesis reaction. Throughout the mixing process an exothermal reaction takes place, yielding a mixture of nickel and iron oxides. Since the formed oxides react with each other at elevated temperatures, the formation of the magnetic NiFe2O4 phase was studied by means of combined differential thermal analysis (DTA), thermogravimetric analysis (TG) and thermomagnetic measurements (TM). The produced powder was then crushed and washed with deionized water in order to remove the NaCl. The obtained ferrite powders were analyzed and discussed through structural, compositional and magnetic characterization.
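For orientation, the 1:2:8:10 molar ratio quoted above can be translated into precursor masses. The sketch below is only an illustrative calculation; it assumes that the ratio refers to NiSO4·6H2O : Fe(NO3)3·9H2O : NaOH : NaCl (the order in which the precursors are listed), and the batch size of 0.05 mol NiFe2O4 is arbitrary rather than taken from the paper.

# Molar masses in g/mol (rounded)
M = {"NiSO4.6H2O": 262.85, "Fe(NO3)3.9H2O": 404.00, "NaOH": 40.00, "NaCl": 58.44}
ratio = {"NiSO4.6H2O": 1, "Fe(NO3)3.9H2O": 2, "NaOH": 8, "NaCl": 10}

n_ferrite = 0.05  # mol of NiFe2O4 targeted (arbitrary batch size)
for precursor, r in ratio.items():
    mass = r * n_ferrite * M[precursor]
    print(f"{precursor}: {mass:.1f} g")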

Thermal effects during the synthesis were analyzed by combined DTA and TG techniques using a TA Instruments SDT Q600 thermal analyzer. The measurements were carried out up to 900 °C at a heating rate of 20 °C/min under flowing nitrogen atmosphere. Alumina crucibles were used both for the powder sample and as a reference material. Thermomagnetic measurements (TM) were carried out on an EG&G vibrating sample magnetometer (VSM) under vacuum by applying a field of 4 kA/m. The applied procedure included heating at 4 °C/min up to 800 °C and a dwell time of 30 min at the maximum temperature. The method of interpretation of the obtained curve is given in more detail in [10]. Phase composition of the obtained Ni-ferrite powder was studied using X-ray diffraction (XRD) analysis at ambient temperature. Rietveld refinement, quantitative analysis and crystallite size determination of the identified phases were carried out using the HighScore Plus software and ICSD data. Additional phase composition analysis was performed using room temperature 57Fe Mössbauer spectroscopy (MS). The Mössbauer spectrum was recorded in the standard transmission geometry using a 57Co(Rh) source, while calibration was carried out against α-Fe foil. The CONFIT software package [11] was used for the spectrum deconvolution and fitting. Structure and morphology of the prepared Ni-ferrite powder were studied using field emission scanning electron microscopy (FE-SEM) and transmission electron microscopy (TEM). A VSM with a magnetic field strength of 800 kA/m was employed for measurements of the room temperature magnetic properties of the obtained ferrite powder.

3. RESULTS AND DISCUSSION

In order to study the formation of the NiFe2O4 phase through the thermal effects and weight changes during heat treatment, the combined DTA/TG analysis was carried out up to 900 °C. The obtained curves are presented in Picture 1.

Picture 1. DTA/TG curves demonstrating formation of NiFe2O4 phase


According to theory, the oxides react in successive stages which start at temperatures well below the range where spinel phase formation can be detected [12,13]. The observed thermal effects from about 230 to 400 °C can be related to the beginning of localized ferrite formation by surface diffusion. The spinel phase is first formed at the favorable sites, and between 500 and 600 °C a fully coherent layer is formed and the onset of lattice penetration can be detected [13]. The small exothermic effect between 670 and 700 °C (peak at 686 °C) can be attributed to interaction by volume diffusion [12]. By 800 °C the whole reaction is practically finished and the ferrite phase should be detectable by XRD. Further increase of temperature on one hand favors crystal growth and thus reduces imperfections [12], but on the other hand, if excessive, it may also lead to sintering of the obtained finely powdered ferrite. The broad temperature range in which the reaction is believed to occur is ascribed to the reactivity of the oxides, which is influenced by purity, surface area, contact area and surface conditions that facilitate the surface diffusion process [14].


According to the obtained results of the XRD analysis, presented in Table 1, the obtained ferrite powder after TM measurements up to 800 °C consists of the NiFe2O4 phase and metallic Ni. Given that grain growth during the solid-state reaction was inhibited by the NaCl, the estimated crystallite sizes of 25 and 45 nm, respectively, are somewhat expected. The presence of the Fe2O3 and Fe3O4 phases in the nickel ferrite sample was considered as well, especially given that most of their peaks overlap those of NiFe2O4. However, the peaks which are specific only to these two phases are either absent or too weak, which suggests that these phases are not present or that their amount is practically negligible.
Table 1. Phase composition and corresponding crystallite sizes determined by XRD

Phase | Content (wt.%) | Crystallite size (nm)
NiFe2O4 | 89 | 25
Ni | 11 | 45

From literature [15] it is known that the reaction of NiO and Fe2O3 powders is accompanied by a significant release of oxygen, which can be observed on the TGA curve. The observed weight changes, at least at the initial stages up to 200 °C, can to some extent also be related to removal of the bound water. According to literature [15], such significant oxygen loss may be attributed to the fact that ferrite formation progresses through some intermediate oxygen-deficient phase that is a form of magnetite.
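The crystallite sizes in Table 1 were obtained by Rietveld refinement in HighScore Plus. A much cruder, single-peak estimate can be made with the Scherrer equation; the sketch below is only such a back-of-the-envelope cross-check, and the Cu Kα wavelength and the peak width used in it are assumptions, not values reported in the paper.

import math

K = 0.9              # Scherrer shape factor (dimensionless, assumed)
wavelength = 0.15406 # nm, Cu K-alpha radiation (assumed; radiation source not stated in the paper)
two_theta = 35.7     # deg, approximate position of the NiFe2O4 (311) reflection
fwhm_deg = 0.33      # deg, hypothetical peak broadening after instrumental correction

beta = math.radians(fwhm_deg)
theta = math.radians(two_theta / 2.0)
size_nm = K * wavelength / (beta * math.cos(theta))
print(f"crystallite size ~ {size_nm:.0f} nm")  # ~25 nm for the assumed broadening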

For better insight into the structure and composition of the


prepared ferrite powder after TM the 57Fe Mssbauer
spectroscopic analysis was carried out and the obtained
spectrum is presented in Picture 3. The results of MS
suggest that nearly 60% of the iron atoms are in positions
that correspond to regular Ni-ferrite structure while the rest
of them are on some redundant tetrahedral positions or on
positions where they do not have the supposed number of
surrounding iron atoms i.e. not fully octahedral.

In addition to the DTA/TG analysis, thermomagnetic measurements up to 800 °C were utilized to observe the NiFe2O4 phase formation. The recorded TM curves presented in Picture 2 illustrate the temperature dependence of the total magnetic moment of the studied powder sample under an applied external magnetic field.

Picture 2. Recorded thermomagnetic curves

Picture 3. Mössbauer spectrum of the obtained Ni-ferrite


powder after TM

Considering that the starting oxide mixture is essentially a nonmagnetic/nonmagnetized material, in the heating stage of the measurements, i.e. in the temperature interval from ambient temperature to 800 °C, no significant changes of the total magnetic moment can be observed. On the other hand, the cooling curve, quite the opposite, exhibits a substantial increase of the total magnetic moment with a decrease of temperature. Such an increase can be reasonably ascribed to the increase in the volume of magnetically active material, i.e. the formation of ferrimagnetic NiFe2O4 phase, as well as to an overall decrease of thermal energy and the field-cooling process [16].

This can be attributed to the determined phase composition


and calculated crystallite sizes as it is known that a ratio of
the tetrahedral (A) and octahedral (B) sites in the ultrafine
ferrite powders may differ to some extent from that of bulk
materials [8]. It is widely accepted that small particle sizes
promote a mixed spinel structure whereas bulk materials
preferentially have an inverse spinel structure [17].

Moreover, similar studies on Ni-ferrite suggest a possibility


that the powder sample is either non-stoichiometric or not
completely inverse [18], which can explain the presence of
metallic nickel in the prepared ferrite powder determined by
XRD. In addition, a limited Ni2+ substitution for Fe2+ as a
consequence of the used solid-state reaction may also be
considered. As in case of XRD the presence of other oxide
iron phases was not confirmed by MS phase analysis.


The obtained FE-SEM and TEM micrographs presented in Picture 4 illustrate the morphology and structure of the Ni-ferrite powder after the thermomagnetic measurements up to 800 °C. The presence of nanoscale NiFe2O4 particles cannot be clearly observed on the FE-SEM image (Picture 4a) due to the fact that they form considerably larger agglomerates.

Picture 4. Morphology of the prepared ferrite powder after TM: a) SEM image and b) TEM image

The true morphology of the obtained powder can be much better observed on the TEM micrograph (Picture 4b), which clearly shows individual nanoparticles of the Ni-ferrite powder with an average particle size consistent with the crystallite size determined by XRD.
Besides its composition and structure, the prepared Ni-ferrite powder after TM was also characterized in terms of its magnetic properties.

Picture 5. Room temperature hysteresis loop of the prepared Ni-ferrite powder after TM

The obtained room temperature hysteresis loop presented in Picture 5 has the characteristic S shape that demonstrates the magnetic quality of a magnetically soft material. This is in line with the results of the structural and compositional characterization, which have confirmed the formation and presence of the nanocrystalline ferrimagnetic NiFe2O4 phase.

5. CONCLUSION

Formation of the nanocrystalline NiFe2O4 phase via a solid-state reaction route was studied using combined DTA/TG and TM techniques. The obtained DTA/TG and TM curves enabled us to observe the formation of the desired magnetic phase through the recorded thermal, weight and magnetic moment changes, whereas the analysis of the corresponding curves provided insight into the mechanism of the reaction.
The results of the thermal analysis suggest that the reaction occurs in a wide temperature range and that it is practically finished by 800 °C, while further increase of temperature only promotes grain growth and sintering of the obtained ultrafine powder. The notable weight loss was ascribed to a release of oxygen during the reaction.
The substantial increase of the total magnetic moment with a decrease of temperature determined by TM was attributed to the formation of the ferrimagnetic NiFe2O4 phase during the heating stage of the measurements, as well as to an overall decrease of thermal energy and field cooling during the subsequent cooling stage.
The results of the subsequent XRD and MS phase analyses have confirmed the formation of the NiFe2O4 phase, while the results of FE-SEM and TEM, along with the crystallite size determined by XRD, have confirmed its nanocrystalline structure. These findings were additionally supported by the obtained room


temperature hysteresis loop, which demonstrates the presence of a soft magnetic material.

ACKNOWLEDGEMENT
This work has been supported by the Ministry of Education,
Science and Technological Development of the Republic of
Serbia (Projects: ON 172037 and TR 34023) and by
European Regional Development Fund through CEITECCentral European Institute of Technology (project
CZ.1.05/1.1.00/02.0068). The presented work is carried out
through joint scientific cooperation of the Serbian Academy
of Sciences and Arts and the Academy of Sciences of the
Czech Republic under project: Research and development of
functional nanomaterials for various applications.

References
[1] Fresno,F., Yoshida,T., Gokon,N., Fernandez-Saavedra,R., Kodama,T.: Comparative study of the activity of nickel ferrites for solar hydrogen production by two-step thermochemical cycles, Int. J. Hydrogen Energ., 35 (2010) 8503-8510.
[2] Lin,K.S., Adhikari,A.K., Tsai,Z.Y., Chen,Y.P., Chien,T.T., Tsai,H.B.: Synthesis and characterization of nickel ferrite nanocatalysts for CO2 decomposition, Catalysis Today, 174(1) (2011) 88-96.
[3] Darshane,S.L., Suryavanshi,S.S., Mulla,I.S.: Nanostructured nickel ferrite: A liquid petroleum gas sensor, Ceramics International, 35 (2009) 1793-1797.
[4] Dolia,S.N., Sharma,R., Sharma,M.P., Saxena,N.S.: Synthesis, X-ray diffraction and optical band gap study of nanoparticles of NiFe2O4, Indian Journal of Pure & Applied Physics, 44 (2006) 774-776.
[5] Tiwary,R.K., Narayan,S.P., Pandey,O.P.: Preparation of strontium hexaferrite magnets from celestite and blue dust by mechanochemical route, J. Min. Metall. B, 44B (2008) 91-100.
[6] Ćosović,A.R., Žák,T., Glišić,S.B., Sokić,M.D., Lazarević,S.S., Ćosović,V.R., Orlović,A.M.: Synthesis of nano-crystalline NiFe2O4 powders in subcritical and supercritical ethanol, Journal of Supercritical Fluids, 113 (2016) 96-105.
[7] Ye,X.R., Jia,D.Z., Yu,J.Q., Xin,X.Q., Xue,Z.L.: One-step solid-state reactions at ambient temperatures - a novel approach to nanocrystals synthesis, Advanced Materials, 11(11) (1999) 941-942.
[8] Kodama,T., Wada,Y., Yamamoto,T., Tsuji,M., Tamamura,Y.: Synthesis and characterization of ultrafine nickel(II)-bearing ferrites (NixFe3-xO4, x = 0.14-1.0), J. Mater. Chem., 5(9) (1995) 1413-1418.
[9] Wiley,J.B., Kaner,R.B.: Rapid solid-state precursor synthesis of materials, Science, 255 (1992) 1093-1097.
[10] Talijan,N., Ćosović,V., Žák,T., Grujić,A., Stajić-Trošić,J.: Structural and Phase Composition Modification of Nanocrystalline Nd14Fe79B7 Alloy During Thermomagnetic Measurements, J. Min. Metall. B, 45B(1) (2009) 111-119.
[11] Žák,T., Jirásková,Y.: CONFIT: Mössbauer Spectra Fitting Program, Surf. Interface Anal., 38 (2006) 710-714.
[12] Brown,M.E.: Introduction to Thermal Analysis: Techniques and Applications, Chapman and Hall Ltd., London & New York, 1988.
[13] Brown,M.E., Dollimore,D., Galwey,A.K.: Comprehensive Chemical Kinetics, Eds. C.H. Bamford, C.F.H. Tipper, Vol. 22, Reactions in the Solid State, Elsevier Scientific Publishing Company, Amsterdam-Oxford-New York, 1980.
[14] Singh,B.N., Banerjee,R.K., Arora,B.R.: Thermoanalytical study of the solid state reaction between hydrated ZnO and Fe2O3, Journal of Thermal Analysis, 18 (1980) 5-13.
[15] Elwell,D., Parker,R., Tinsley,C.J.: The formation of nickel ferrite, Czechoslovak Journal of Physics B, 17(4) (1967) 382-386.
[16] Žák,T., Ćosović,V., Ćosović,A., David,B., Talijan,N., Živković,D.: Formation of Magnetic Microstructure of the Nanosized NiFe2O4 Synthesized Via Solid-State Reaction, Science of Sintering, 44(1) (2012) 103-112.
[17] Ceylan,A., Ozcan,S., Ni,C., Shah,S.I.: Solid state reaction synthesis of NiFe2O4 nanoparticles, J. Magn. Magn. Mater., 320(6) (2008) 857-863.
[18] Morrish,A.H., Haneda,K.: Magnetic structure of small NiFe2O4 particles, J. Appl. Phys., 52(3) (1981) 2496-2498.


PRELIMINARY ANALYSIS OF THE POSSIBILITY OF PREPARING PVB/IF-WS2 COMPOSITES. EFFECT OF NANOPARTICLES ADDITION ON THERMAL AND RHEOLOGICAL BEHAVIOR OF PVB
DANICA M. SIMIĆ
Military Technical Institute, Belgrade, simic_danica@yahoo.com
DUŠICA B. STOJANOVIĆ
University of Belgrade, Faculty of Technology and Metallurgy, duca@tmf.bg.ac.rs
MIRJANA DIMIĆ
Military Technical Institute, Belgrade, mirjanadimicjevtic@gmail.com
LJUBICA TOTOVSKI
Military Technical Institute, Belgrade, ljtotovski@gmail.com
SAŠA BRZIĆ
Military Technical Institute, Belgrade, sasabrzic@gmail.com
PETAR S. USKOKOVIĆ
University of Belgrade, Faculty of Technology and Metallurgy, puskokovic@tmf.bg.ac.rs
RADOSLAV R. ALEKSIĆ
University of Belgrade, Faculty of Technology and Metallurgy

Abstract: A possibility of using inorganic fullerene-like tungsten disulfide, IF-WS2, nanoparticles as a filler in poly
(vinyl butyral), PVB, for improving its thermal and rheological properties is examined. PVB is a thermoplastic polymer
with excellent properties, widely used: in ballistic protection, for protection of safety glass, in metal primers and
coatings, temporary binders. Two different molecular weights of PVB were previously examined in this research:
Mowital B60H and B75H. Both grades of PVB were dissolved in different solvents: ethanol and 2-propanol. Thin films
were prepared by solvent casting technique. The glass transition temperature (Tg) of the tested samples was determined
using differential scanning calorimetry (DSC), at three different heating rates (5C/min, 10C/min and 20C/min).
After choosing a solvent and PVB grade, IF-WS2 nanoparticles were added to PVB solutions and dispersed by
ultrasonic irradiation. Compatibility, i.e. interaction of IF-WS2 with the dissolved PVB was examined by
microcalorimetry method. The nanoparticles dispersion and deagglomeration in matrix of PVB was analyzed by
scanning electron microscope (SEM). The effect of IF-WS2 on rheological properties of the chosen samples has been
examined using Dynamic Mechanical Thermal Analysis (DMTA), observing storage modulus, loss modulus and the loss
factor as functions of temperature for the tested composites.
Keywords: inorganic fullerene-like tungsten disulfide, nanoparticles, poly(vinyl butyral), compatibility, DMTA, DSC.

1. INTRODUCTION
Inorganic fullerene-like nanoparticles of transition-metal
dichalcogenides, having a spherical, closed structure, are well
known for their excellent mechanical properties. They are
used for a wide range of applications, including aerospace
and automotive technology, in different mechanisms for
load bearing and release, for corrosion protection, and as
solid lubricants [1]. Tungsten disulfide, as one among
them, is extensively studied for its ability to control
wetting, adhesion, lubrication on surfaces and interfaces,
and is recognized as promising filler of the composite
materials. Inorganic fullerene-like tungsten disulfide
nanoparticles (IF-WS2) are thermally stable and compatible with various systems, so incorporating them into a proper matrix may lead to composites with new properties. It has been shown that with low IF-WS2 additions into a polymer matrix the composites exhibit improved thermal, rheological and mechanical properties compared to the neat polymers [2]. Different thermoplastic polymers and thermosetting resins have been examined in combination with WS2 as reinforcement, for the purposes of development of new advanced composites with improved mechanical properties for high-technology applications [1-7]. Multilayer tungsten disulfide has shown outstanding shock resistance, superior even to that of carbon nanotubes [3, 4, 8].


In this paper the possibility of using IF-WS2 nanoparticles as a filler in poly(vinyl butyral), PVB, for improving its thermal and rheological properties is examined. PVB is a thermoplastic polymer with excellent properties, widely used for the protection of safety glass, in metal primers and different types of protective coatings, and as a temporary binder, and nowadays it is also widely used in ballistic protection [9-11]. PVB is a tough polymer with excellent flexibility and broad compatibility with modifying resins and additives; it is non-toxic, has low odor, good adhesion to many substrates, strong binding, impact resistance, good tensile strength and elasticity, and freezing and aging resistance; it is well soluble in alcohols and many other organic solvents, dries fast with fast solvent release and low solvent retention, forms films well, and is transparent and colorless [9].


2.5. Microcalorimetry analysis method


All spontaneous chemical and physical processes are associated with heat effects. Monitoring the flow of heat may therefore be used to estimate the interactions between different materials in contact, or to study their compatibility. This method is widely accepted in laboratories that test the chemical compatibility of explosives with polymers and other materials using a heat flow calorimeter, as described in the standard STANAG 4147 [13]. However, microcalorimetry may as well be used to observe the degree of interaction between any other materials.
In this research, these tests were performed using the heat flow calorimeter TAM III (TA Instruments). The samples (nanoparticles, PVB powder, PVB dissolved in ethanol and in 2-propanol, and the mixtures of nanoparticles with pure or dissolved PVB) were heated for 456 hours at 75 °C. The heat released over time is compared with a reference value, which represents the sum of the heat released when these materials are heated separately. Based on the results of the measurements, the energy released per unit of mass is determined for the examined materials, separately and for their mixtures. The coefficient D is then calculated as a relative measure of the interaction between the tested materials, using equation (1):

2. MATERIALS AND EXPERIMENTAL PROCEDURE
2.1. Preparation of composite material samples
Two different molecular weights of fine-grained white
powder PVB (Mowital, Kuraray GMBH) were examined
in this research: Mowital B60H and B75H. PVB in
content of 10 wt.% was dissolved in two solvents: ethanol
and 2-propanol. Thin films were prepared and the glass
transition temperature (Tg) of the tested samples was
determined using differential scanning calorimetry (DSC),
at three different heating rates (5C/min, 10C/min and
20C/min). Homogenization and complete dissolving of
PVB was done on the magnetic stirrer during 24 hours at
room temperature. Inorganic fullerene-like tungsten
disulfide nanoparticles (IF-WS2, NanoLub, d25C ~ 7.5
g/cm3) were added into the PVB/EtOH solution in content
of 1wt.% and 2wt.%, regarding the total mass of PVB in
the sample. Dispersion and particle deagglomeration was
achieved by ultrasonic irradiation during 30 min (Sonic
Vibra Cell VCX 750, 19 mm Ti horn, 20 kHz) at room
temperature. PVB/WS2 nanocomposites were prepared by
the solvent-casting technique. After homogenization,
mixtures were directly poured into previously prepared
flat-bottom dishes and left for the solvents to evaporate.
Then the dishes with the samples were placed into a heating oven overnight, and into a vacuum heating oven for one more night, so the prepared samples turned into solid thin films after the complete amount of solvent had evaporated.

D = 2M / (E + S)                (1)

where: M - heat generation of the mixture, J/g; E - heat generation of the nanoparticles, J/g; S - heat generation of the polymer/solution, J/g.
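Equation (1) is a direct ratio of released energies, so it is easy to evaluate once the calorimeter data are in hand. The sketch below is only an illustrative evaluation of D, using as an example the released-energy values later reported for the PVB/ethanol + IF-WS2 system in Table 3.

def compatibility_coefficient(mixture, nanoparticles, polymer):
    # D = 2*M / (E + S); all inputs are released energies in J/g
    return 2.0 * mixture / (nanoparticles + polymer)

# Values taken from Table 3 (PVB B60H dissolved in ethanol, with IF-WS2)
D = compatibility_coefficient(mixture=18.21, nanoparticles=1.77, polymer=2.72)
print(f"D = {D:.2f}")  # ~8.11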

2.3. SEM/EDS analysis


The quality of the nanoparticles dispersion and
deagglomeration in PVB matrix was analyzed by
scanning electron microscope (SEM), JEOL JSM-6610LV
connected with energy dispersive X-ray spectrometer
(EDS), OXFORD X-Max with Aztec software. The
samples were carbon coated and investigated under the
voltage of 10-15 kV. EDS analysis is carried out to
observe the morphology and the elemental composition of
the samples.

2.4. Dynamic mechanical thermal analysis


The effect of IF-WS2 on the rheological and viscoelastic properties of the chosen samples was examined using Dynamic Mechanical Thermal Analysis (DMTA). Dynamic mechanical analysis of the investigated samples was performed in a torsion deformation mode using the Modular Compact Rheometer MCR-302 (Anton Paar GmbH) equipped with standard fixtures (SRF12) for rectangular bars and a temperature chamber (CTD620) with high temperature stability (±0.1 °C) [12]. The rectangular thin-film composite samples had dimensions of 54 mm × 10 mm × (0.3-0.4) mm. The DMTA tests included strain amplitude sweep tests, in order to determine the linear viscoelastic range (LVR) of the tested samples.

2.2. Differential Scanning Calorimetry


The glass transition temperature (Tg) of the prepared thin
film samples was determined using DSC Q20 (TA
Instruments), with data acquisition program Universal
V4.7A. The measurements were performed under a
nitrogen flow of 50 ml/min in the temperature range from
20C to 100C. The samples were first heated from 20 to
100C at a rate of 5, 10 and 20C/min, then cooled down
to 20C at the same rate, and then heated once again. Tg
from the first and the second heating, at the three heating
rates, were compared and analyzed.


Strain sweeps are oscillatory tests performed at variable amplitudes, while keeping the frequency (and the measuring temperature) at a constant value. These tests were performed at T = 20 °C, while the shear strain was varied from 0.001% up to 10%, i.e. from 0.00001 up to 0.1, with 41 equidistant values on a linear scale. The frequency was held constant at 6.28 rad/s.


3. RESULTS AND DISCUSSION


3.1. DSC results
For PVB grades B60H and B75H, for all the three heating
rates, the registered Tg values, presented in Tables 1 and 2, are much more uniform for the second heating, while for the first heating there is a significant difference between the obtained results. This is most pronounced for PVB dissolved in 2-propanol and for pure PVB powder.

In order to use the DMTA technique to accurately determine the thermorheological properties of a material, it must be deformed at amplitudes that remain within the linear viscoelastic region (LVR). Within the LVR, the viscoelastic response of the polymer is independent of the magnitude of deformation. As a general rule, this region must be determined for every type of polymeric material by DMTA amplitude sweep tests, in which the frequency is fixed and the strain amplitude is incrementally increased. The strain sweep test is the first step in dynamic mechanical analysis and is always performed prior to a frequency sweep test in order to determine an appropriate strain level for the temperature and frequency sweeps. The temperature ramp test involves measurements of the storage and loss moduli and the loss factor over a specified temperature range at constant strain (or stress) amplitude and constant frequency. The temperature ramp test was carried out in ramp fashion: in the temperature range from 20 °C to 120 °C the rheological parameters of the viscoelastic properties of the prepared composites were observed, namely the storage modulus (G') and the loss modulus (G'') as functions of temperature, and the loss factor tan(δ) = G''/G'. The heating rate was 5 °C/min and a single frequency point of 1 Hz was chosen. The glass transition temperature (Tg), determined by the dynamic mechanical measurements, was estimated as the temperature at which the loss factor tan(δ) reached its maximum value.
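The Tg read-out described above (temperature of the tan(δ) maximum) is straightforward to automate once G' and G'' have been exported from the rheometer software. The sketch below uses a few purely illustrative data points and is not the authors' processing script; only the read-out rule (Tg at the tan(δ) peak) is taken from the text.

import numpy as np

# Illustrative temperature ramp data (°C) and moduli (Pa); real curves come from the MCR-302 export
temperature = np.array([40.0, 55.0, 64.0, 75.0, 90.0])
storage_G = np.array([9.0e8, 6.0e8, 2.0e8, 4.0e7, 1.0e7])
loss_G = np.array([9.0e7, 1.2e8, 9.0e7, 1.4e7, 2.5e6])

tan_delta = loss_G / storage_G          # loss factor tan(delta) = G'' / G'
Tg = temperature[np.argmax(tan_delta)]  # glass transition taken at the tan(delta) peak
print(f"Tg ~ {Tg} °C, tan(delta)max = {tan_delta.max():.3f}")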

It may be observed that higher Tg values are obtained in the second heating, for all three heating rates. For the first heating, at the heating rate of 20 °C/min, the highest Tg is recorded for both solvents and for both PVB grades. In the second heating, PVB dissolved in 2-propanol has the highest registered Tg for the heating rate of 5 °C/min, while pure PVB powder and PVB dissolved in ethanol have the highest Tg at the heating rate of 20 °C/min (Picture 1).

Picture 1. DSC at heating rates of 5, 10 and 20 °C/min

Table 1. Tg of PVB B60H dissolved in different solvents, °C

Heating rate, °C/min | PVB powder (1st / 2nd heating) | PVB/EtOH (1st / 2nd heating) | PVB/2-propanol (1st / 2nd heating)
5 | 58.13 / 67.02 | 51.88 / 58.47 | 41.61 / 60.60
10 | 63.34 / 66.60 | 53.59 / 57.12 | 45.42 / 59.75
20 | 65.32 / 68.61 | 58.09 / 61.62 | 50.96 / 57.32
EtOH - ethanol

Table 2. Tg of PVB B75H dissolved in different solvents, °C

Heating rate, °C/min | PVB powder (1st / 2nd heating) | PVB/EtOH (1st / 2nd heating) | PVB/2-propanol (1st / 2nd heating)
5 | 57.44 / 73.84 | 50.39 / 59.48 | 29.68 / 36.49
10 | 56.59 / 73.89 | 52.70 / 64.73 | 33.25 / 36.99
20 | 66.05 / 75.48 | 57.97 / 63.75 | 37.67 / 40.84

PVB dissolved in ethanol has closer Tg values taken from the first and the second heating, for all three heating rates, i.e. more reproducible and reliable results are obtained for this solvent.

3.2. Microcalorimetry results


Individual measurements were performed for the release
of heat during microcalorimetry test for the nanoparticles,
pure PVB, PVB dissolved in ethanol and 2-propanol, and
for their mixtures.

The representative curves of the registered heat flow are shown in Picture 2 for the samples with 2-propanol. The theoretical curve obtained by calculation is presented as well. Table 3 shows the calculated values of the energy for the individual materials tested and for the mixtures and, based on that, their calculated relative interaction presented as D.




Picture 2. Heat flow curves for samples with 2-propanol


Table 3. Values of released energy - microcalorimetry results for PVB B60H / IF-WS2

Sample | Released energy, J/g | Compatibility coefficient D
1: PVB powder | 2.78 |
   IF-WS2 nanoparticles | 1.77 | 8.77
   PVB / IF-WS2 | 19.96 |
2: PVB/ethanol | 2.72 |
   IF-WS2 nanoparticles | 1.77 | 8.11
   PVB/ethanol / IF-WS2 | 18.21 |
3: PVB/2-propanol | 3.36 |
   IF-WS2 nanoparticles | 1.77 | 6.13
   PVB/2-propanol / IF-WS2 | 15.73 |

Picture 3. SEM image of WS2/PVB B60H/EtOH

Comparing the theoretical and experimental curves of heat flow, the value of D for IF-WS2 in the mixture with pure PVB is closer to the value of D for PVB dissolved in ethanol than to the value of D for 2-propanol. It may also be observed that the values of released energy are very similar for PVB powder and for PVB dissolved in ethanol, while there is a significant difference in the released energy for PVB dissolved in 2-propanol. Based on these observations, and on the results obtained by DSC, it may be concluded that the more appropriate solvent is ethanol. Thus, further on, this solvent was used for the experimental work: the nanoparticles were incorporated in the PVB/ethanol system. Grade B60H was chosen for further work because of its faster dissolving.

3.3. SEM/EDS analysis results

The SEM image of a thin-film nanocomposite sample, 1 wt.% IF-WS2/PVB/EtOH (Picture 3), shows that the nanoparticles are well dispersed, but still somewhat agglomerated: nanoparticles individually dispersed in the matrix may be observed, with occasional particles agglomerating together. The EDS spectrum in Picture 4 confirms that the bright dots consist of W and S, attributed to IF-WS2 nanoparticles (recorded in BSE mode).

Picture 4. SEM/EDS analysis of WS2/PVB B60H/EtOH


EDS element mapping of the same structures showed the
spatial distributions of W and S in thin film samples.
These maps confirm good nanoparticles dispersion
(Picture 5).

Picture 5. SEM/EDS maps of a sample PVB B60H/EtOH/1wt.%WS2: W and S element maps




3.4. DMTA results


The results of the DMTA strain amplitude sweep test, performed at room temperature (~20 °C) for the analyzed samples, are shown in the diagrams in Picture 6. Strain sweep tests are mostly carried out for the sole purpose of determining the limit of the linear viscoelastic (LVE) range. As long as the strain amplitudes are still below the limiting value, γc, the curves of G' and G'' remain at a constant value, i.e.
change at these low deformations. When measuring in the
LVE range, practical users speak of ''non-destructive
testing''. Here, the elastic behavior dominates the viscous
one. Also, G' value increases with increasing WS2
content, regardless of strain amplitude level. The highest
values are observed for 2 wt.% of WS2 (Picture 6). The G'
value is a measure of a deformation energy stored by the
polymer during the shear process, showing completely
reversible deformation behavior. The increase of the G'
value is connected with a reduction of polymer chain
mobility, and it may be caused by the increasing the
degree of cross-linking or, like here, by binder-filler
interactions [14-16]. The same trend could be observed
for G'': increasing G'' value indicates an increasing portion
of deformation energy which is used up already before the
final breakdown of the internal structure occurs. This may
occur due to relative motion between the molecules,
mobile single particles, agglomerates or structures which
are not linked or otherwise fixed in the network. The loss
modulus represents the deformation energy which is
dissipated due to inner friction processes.

Picture 7. Temperature dependences of storage modulus


and loss modulus for tested samples
The nanoparticles also caused a mild narrowing of the loss modulus peak, which was attributed to the relaxation process within the composites (Picture 7). The ratio between the loss modulus and the storage modulus, called the loss factor, tan(δ), also has higher values for the samples with nanoparticles (Picture 8 and Table 4). Higher glass transition temperatures are observed for the samples containing IF-WS2 nanoparticles, so it was shown that there is an improvement of the thermal properties of this kind of composite materials due to the nanofiller.

Picture 8. Temperature dependences of the loss factor tan(δ)


Table 4. Tg of the tested samples

Sample | tan(δ)max | Tg [°C]
PVB B60H/EtOH | 0.464 | 64.01
PVB B60H/EtOH/1 wt.% WS2 | 0.466 | 65.78
PVB B60H/EtOH/2 wt.% WS2 | 0.407 | 67.35

Picture 6. Amplitude sweep test: storage modulus and loss modulus dependence on shear strain

Pictures 7 and 8 show the DMTA data obtained in the temperature ramp tests.

A molecular interpretation of the viscoelastic behavior can be given by considering tan(δ), which describes molecular rearrangement regions corresponding to the polymer fractions with different mobility. The intensity at Tg, tan(δ)max, is influenced by the segmental motion of the polymer chains: the lower the mobility restrictions on the polymer chains, the higher the tan(δ)max values. Although the tan(δ)max value remains almost constant on the addition of 1 wt.% of the nanoparticles, a nanoparticle content of 2 wt.% decreases this value. The decrease of the tan(δ)max value is a consequence of a more rigid polymer structure, so one can interpret that a higher content of nanoparticles leads to a reinforcing effect within the tested polymer. This could also indicate that a significant number of chain segments of the polymer in this composite sample (2 wt.% of IF-WS2) participate in this glass transition.

In the glassy state, with increasing temperature, the G' values remain almost constant, while the G'' values slightly increase. Further temperature increase causes a decrease of both moduli. This is a typical behavior of this type of polymer in the transition region. The reason for the slow decrease of the storage modulus could be crystallinity within the polymer structure of the tested samples. These curves are shifted to higher positions with higher content of nanoparticles in the composites. The increase of the G' values is a consequence of the easier mobility of the polymer chains. The loss modulus curves reach higher peak values for higher content of nanoparticles in the composites.




4. CONCLUSION
The possibility of preparing a poly(vinyl butyral)/tungsten disulfide nanocomposite with enhanced thermal and visco-elastic properties, i.e. of reinforcing PVB by adding a small quantity of IF-WS2 nanoparticles (1 or 2 wt.%), was examined. It was first examined how two solvents, ethanol and 2-propanol, affect the thermo-mechanical behavior of two grades of PVB: Mowital B60H and B75H. DSC analysis of the PVB grades B60H and B75H at all three heating rates resulted in more uniform Tg values in the second heating, and the Tg values obtained in the second heating are higher. For the first heating, the highest Tg was recorded at the heating rate of 20 °C/min, for both solvents and both PVB grades. PVB dissolved in ethanol has closer Tg values from the first and second heating for all three heating rates, indicating that more reproducible and reliable results are obtained for this solvent. The heat flow curves obtained by microcalorimetry gave a value of the coefficient D for IF-WS2 mixed with pure PVB that is closer to the value for PVB dissolved in ethanol than to the value of D for 2-propanol. The released energies are very similar for PVB powder and for PVB dissolved in ethanol, while there is a significant difference in the released energy for PVB dissolved in 2-propanol, so ethanol is the more appropriate solvent. Nanoparticles were incorporated into the PVB/ethanol system by the solvent casting technique. SEM analysis showed good dispersion of the IF-WS2 particles with a certain degree of remaining agglomeration of the filler. EDS analysis and element mapping showed good spatial distribution of IF-WS2. The effect of the addition of IF-WS2 nanoparticles on the viscoelastic properties was examined using DMTA. The storage modulus and Tg are higher for the samples with nanoparticles due to the reinforcing effect of the particles.
The obtained results justify the use of the poly(vinyl butyral)/tungsten disulfide nanocomposite as a material with improved mechanical properties that might find application in many areas.

ACKNOWLEDGEMENT
The authors thank the Ministry of Education, Science and Technological Development of the Republic of Serbia for the financial support of this research through the projects TR 34011 and TR 34034.


HIGH PERFORMANCE LIQUID CHROMATOGRAPHY


DETERMINATION OF 2,4,6-TRINITROTOLUENE IN WATER SOLUTION
JOVICA NEŠIĆ
Military Technical Institute, Belgrade, SERBIA
LJILJANA JELISAVAC
Military Technical Institute, Belgrade, SERBIA
ALEKSANDAR MARINKOVIĆ
Faculty of Technology and Metallurgy, Belgrade, SERBIA
SLAVIŠA STOJILJKOVIĆ
TRI "Đorđe Dimitrijević Đura", Kragujevac, SERBIA

Abstract: During the production of explosives and the demilitarization of ordnance, waste water and residual explosive materials, together with their degradation products, are generated, and they represent a significant ecological and toxicological problem. This article describes the first step of this kind of research in MTI. The EPA 8330 method is applied for determination of the concentration of 2,4,6-TNT in aqueous solutions prepared in the laboratory. By using the method of High Performance Liquid Chromatography with PDA detection, optimum operating conditions for separating the 14 reference explosive components of the EPA 8330 mixture have been defined and multi-point calibration was successfully performed. The concentrations of 2,4,6-TNT in the prepared aqueous solutions, before and after adsorption on two adsorbents, were determined by the mentioned HPLC method.
Keywords: 2,4,6-trinitrotoluene, EPA 8330 method, high performance liquid chromatography.


1. INTRODUCTION
Requirements for monitoring the processes of energetic materials production and the demilitarization of ordnance are increasing. The main problem is the explosive retained in process, waste and ground water. The explosive 2,4,6-trinitrotoluene (2,4,6-TNT) and the derivatives resulting from its photocatalytic degradation represent a great danger to human health and the environment [1-3].

Therefore, the U.S. Environmental Protection Agency (EPA) today requires complete characterization of 14 primary explosives, which can be found in the surrounding soil, ground water and rocks.
A variety of chromatographic techniques have been applied
to separate and detect explosives compounds, including gas
chromatography (GC), thin layer chromatography (TLC),
supercritical fluid chromatography (SFC), capillary electro
chromatography (CEC), and high-performance liquid
chromatography (HPLC) [4]. Among these, HPLC with
ultraviolet detection is the preferred method for routine
analysis because of its ease of use, high reliability, ability to
detect unstable and nonvolatile nitroaromatic and nitramine
compounds with good sensitivity [5].

Examination of new sorption materials for the removal of organic contaminants from waste energetic materials is the subject of research around the world. The choice of a suitable method for monitoring the concentration of explosives in water was the first phase of this kind of research in MTI. The characterization of different types of adsorbent materials for their removal from water is the next phase of the research. The results of this study will be a starting point for future research in the area of method development for continuous monitoring of the concentration of energetic materials as pollutants in water and for reducing the pollution of waste and process water.

EPA Method 8330 is the most comprehensive and widely used HPLC method for environmental monitoring. The method is intended for the trace analysis of a collection of 14 explosives residues in water, soil, and sediment (structures shown in Picture 1).

Explosive residues such as 2,4,6-TNT and associated nitroamine impurities present health concerns due to their carcinogenic, mutagenic, and toxic effects. These compounds are highly toxic to the environment and, therefore, require monitoring by many regulatory bodies all over the world.


Picture 1. The structures of 14 explosives listed in U.S. EPA Method 8330 [4]
The method recommends methanol-water as the mobile phase and requires two columns for this analysis. The main difficulty with U.S. EPA Method 8330 is the coelution of dinitrotoluene (DNT) and amino-dinitrotoluene (Am-DNT) isomers on the primary C18 column [6]. Consequently, an additional HPLC run must be performed on a cyano (CN) column, reducing throughput.
Acclaim Explosives (E1 and E2) columns are high
efficiency silica-based columns for explosives analysis that
provide baseline resolution of 14 target explosives listed in
U.S. EPA Method 8330, but with different selectivities. The
E1 column is an effective direct replacement of the current
primary column (C18), while the E2 column is a good
alternative that can be used alone and also serve as a
confirmatory column for explosives analysis [6].

Picture 2. High Performance Liquid Chromatograph with


PDA detector

2. EXPERIMENTAL

Equipment
For the separation and the qualitative and quantitative determination of explosives, a "Waters 1525 EF Binary HPLC Pump" instrument was used. It consisted of a column heater, a "Rheodyne Model 7125" injector, a "Waters 2998" photodiode array detector (PDA) and a Waters vacuum degasser, Picture 2.

Reagents and Standards
- Deionized (DI) water
- Methanol (CH3OH), 99.9 %, HPLC grade (Sigma Aldrich)
- Method 8330 Mixture of Explosives HPLC standards, consisting of 14 explosives and related substances (AccuStandard 8330-R): 2-Am-DNT; 1,3-DNB; 2,4-DNT; HMX; NB; RDX; 1,3,5-TNB; 2,4,6-TNT; 4-Am-DNT; 2,6-DNT; 2-NT; 3-NT; 4-NT; and tetryl, 1000 µg/mL each, in MeOH:AcCN (1:1).


2.1. Separation of 14 explosives on the Acclaim Explosives E2 column
The operational conditions for the separation of the 14 explosives on the Acclaim Explosives E2 column by the EPA 8330 method are shown in Table 1.

Table 1. Optimal operational conditions of the HPLC method
Column: Acclaim Explosives E2, Dionex (silica-based reversed phase); internal diameter 4.6 mm; length 250 mm; packing particle size 5 µm
Precolumn: Acclaim Explosives E2, Dionex; internal diameter 4.3 mm; length 10 mm; packing particle size 5 µm
Mobile phase: methanol / deionized water = 48/52 (v/v)
Flow rate: 0.7 mL/min
PDA detection: 254 nm
Oven temperature: adjusted in the range 27 to 30 °C until optimal resolution was achieved (mobile phase composition was kept constant)

2.2. Determination of the concentration of 2,4,6-TNT in a prepared aqueous solution by the HPLC method EPA 8330

Preparation of calibration solutions and standards

Stock Standard Mix
Mix 100 µL of the EPA 8330 Explosives mixture of HPLC standards (1000 µg/mL each) and 900 µL of methanol in a 1.5 mL vial. The concentration of each explosive in the Stock Standard Mix will be 100 µg/mL.

Working Standard Solutions for Calibration
For calibration, four working standard solutions with different concentrations (50 µg/mL, 40 µg/mL, 30 µg/mL and 10 µg/mL) were prepared by diluting the proper amount of the Stock Standard Mix, or of one of the above mentioned standard solutions, with methanol.

Preparation of samples of the aqueous solution of 2,4,6-TNT
In a 500 mL volumetric flask, 150 mg of 2,4,6-TNT was weighed at a temperature of 20 °C. Deionized water was then added to the flask up to the mark. The maximum solubility of TNT in water is about 130 mg/L at 20 °C.
The mass concentration of 2,4,6-TNT in the aqueous solution, before and after adsorption on two adsorbents, was then measured by the EPA 8330 method.
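As a quick arithmetic check of the standard preparation described above, the usual dilution relation C1·V1 = C2·V2 can be scripted. The sketch below is only illustrative: the stock volumes and target concentrations are taken from the text, while the assumption that each working standard is made up to a total of 1000 µL is introduced here for the example only.

```python
def diluted_conc(c_stock, v_stock_uL, v_total_uL):
    """Concentration after diluting v_stock_uL of stock at c_stock to v_total_uL."""
    return c_stock * v_stock_uL / v_total_uL

# Stock Standard Mix: 100 uL of the 1000 ug/mL standard + 900 uL methanol = 1000 uL
print(diluted_conc(1000.0, 100.0, 1000.0))          # -> 100.0 ug/mL

# Working standards diluted from the 100 ug/mL stock (assumed 1000 uL total volume)
for target in (50.0, 40.0, 30.0, 10.0):             # ug/mL
    v_stock = target / 100.0 * 1000.0               # uL of stock required
    print(f"{target} ug/mL: {v_stock:.0f} uL stock + {1000.0 - v_stock:.0f} uL methanol")
```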

3. RESULTS AND DISCUSSION

3.1. Separation of the EPA 8330 mixture of 14 explosives on the Acclaim Explosives E2 column
HPLC separation of the EPA 8330 mixture of 14 explosives was performed at the operational conditions given in Table 1. The temperature was adjusted in the range 27 to 30 °C until optimal resolution was achieved (Pictures 3 to 5), while the mobile phase composition was kept constant.

Picture 3. HPLC chromatogram of the EPA 8330 mixture of 14 explosives at 27 °C


The analysis of the chromatogram showed good baseline separation of all 14 peaks at 27 °C. The retention times of the 14 explosives separated on the Acclaim Explosives E2 column by the EPA 8330 method are shown in Table 2.

Table 2. Separated and identified explosives in the EPA 8330 mixture at the optimal operating conditions
Peak  Explosive                    Abbreviation    RT, min
1     Octogen                      HMX             10.299
2     Hexogen                      RDX             17.053
3     1,3,5-Trinitrobenzene        1,3,5-TNB       18.005
4     1,3-Dinitrobenzene           1,3-DNB         22.728
5     Nitrobenzene                 NB              24.603
6     2,4,6-Trinitrotoluene        2,4,6-TNT       30.983
7     Tetryl                       Tetryl          32.362
8     2,6-Dinitrotoluene           2,6-DNT         36.811
9     2,4-Dinitrotoluene           2,4-DNT         38.319
10    2-Nitrotoluene               2-NT            42.038
11    3-Nitrotoluene               3-NT            45.699
12    4-Nitrotoluene               4-NT            49.236
13    4-Amino-2,6-dinitrotoluene   4-Am-2,6-DNT    51.161
14    2-Amino-4,6-dinitrotoluene   2-Am-4,6-DNT    54.211

Analyzing the chromatogram recorded at 30 °C, overlapping of peaks 6 and 7 (2,4,6-TNT and tetryl) was observed. Also, due to coelution, i.e. inefficient or poor separation of two components on the column, peaks 13 and 14 overlap completely, so one of the 14 peaks is missing from the chromatogram (Picture 4). The chromatogram recorded at 28 °C shows improved separation of peaks 6 and 7 and separation of all 14 peaks, but peaks 12 and 13 are not separated down to the baseline. It can be concluded that optimal resolution was achieved at 27 °C (Picture 3).
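As a reference for the "baseline separation" judged above (a standard chromatographic definition, not a formula taken from this paper), the resolution between two adjacent peaks is commonly computed from their retention times and peak base widths as

R_s = \frac{2\,(t_{R,2} - t_{R,1})}{w_1 + w_2},

with R_s ≥ 1.5 usually regarded as baseline resolution.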

Picture 4. HPLC chromatogram of the EPA 8330 mixture of 14 explosives at 30 °C

Picture 5. HPLC chromatogram of the EPA 8330 mixture of 14 explosives at 28 °C



3.2. Determination of the concentration of 2,4,6-TNT in a prepared aqueous solution by the HPLC method EPA 8330

Multi-point calibration by the EPA 8330 method
Calibration standard solutions with concentrations of 10, 30, 40 and 50 µg/mL were analyzed by HPLC under the optimum operating conditions (Table 1) at 27 °C. Picture 6 shows the resulting four-point calibration curve for 2,4,6-TNT; the correlation coefficient of the 2,4,6-TNT calibration curve is 0.9989.

Picture 6. Results of the multi-point calibration procedure for 2,4,6-TNT

The correlation coefficients of the calibration curves for each of the 14 components of the EPA 8330 mixture are:
1. HMX 0.982380;
2. RDX 0.998573;
3. 1,3,5-TNB 0.998927;
4. 1,3-DNB 0.9989;
5. NB 0.9996;
6. 2,4,6-TNT 0.9989;
7. Tetryl 0.9988;
8. 2,6-DNT 0.9989;
9. 2,4-DNT 0.9988;
10. 2-NT 0.9989;
11. 3-NT 0.9996;
12. 4-NT 0.9992;
13. 4-Amino-2,6-DNT 0.9982;
14. 2-Amino-4,6-DNT 0.9983.

The correlation coefficients of the calibration procedure are all close to 1, so the method can also be used, under the same operating conditions, to analyze any of these explosives present in a water solution.
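For readers who wish to reproduce the calibration step, the sketch below fits a straight line through (concentration, peak area) pairs and reports the correlation coefficient. Only the four concentration levels come from the text; the peak areas and helper names are illustrative placeholders, not measured values.

```python
import numpy as np

conc = np.array([10.0, 30.0, 40.0, 50.0])           # calibration levels, ug/mL (from the text)
area = np.array([1.02e5, 3.05e5, 4.01e5, 5.10e5])    # hypothetical detector responses

slope, intercept = np.polyfit(conc, area, 1)         # least-squares line: area = slope*conc + intercept
r = np.corrcoef(conc, area)[0, 1]                    # correlation coefficient of the calibration
print(f"slope = {slope:.1f}, intercept = {intercept:.1f}, r = {r:.4f}")

# Concentration of an unknown sample from its measured peak area
unknown_area = 2.5e5
print("c =", (unknown_area - intercept) / slope, "ug/mL")
```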

Determination of the TNT concentration in the aqueous solution by the HPLC method EPA 8330
The chromatogram of the sample of the aqueous solution of 2,4,6-TNT after adsorption on adsorbent B, recorded using the HPLC EPA 8330 method under the optimal operating conditions (Table 1) at 27 °C, is shown in Picture 7.

Picture 7. HPLC chromatogram of the aqueous solution of 2,4,6-TNT after adsorption, by EPA 8330

The final results of the HPLC measurements of the mass concentration of TNT in the prepared aqueous solution, before and after adsorption on adsorbents A and B, are shown in Table 3.

Table 3. Quantitative analysis of the aqueous solution of 2,4,6-TNT by EPA 8330
Concentration of 2,4,6-TNT in aqueous solution      µg/mL
- before adsorption                                  51.04
- after adsorption on adsorbent A                    35.27
- after adsorption on adsorbent B                    33.60
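As an illustration of how the values in Table 3 can be post-processed, the short sketch below computes the relative TNT removal for each adsorbent from the before/after concentrations; the percentages are simple derived quantities shown for illustration, not results reported in the paper.

```python
c0 = 51.04                                          # ug/mL before adsorption (Table 3)
after = {"adsorbent A": 35.27, "adsorbent B": 33.60}

for name, c in after.items():
    removal = (c0 - c) / c0 * 100.0                 # percent of TNT removed from the solution
    print(f"{name}: {removal:.1f} % removed")
```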
4. CONCLUSION
By using the method of High Performance Liquid Chromatography with PDA detection, the optimum operating conditions for separating the 14 reference explosive components of the EPA 8330 mixture have been defined and multi-point calibration was successfully performed. The concentrations of 2,4,6-TNT in the prepared aqueous solutions, before and after adsorption on two adsorbents, were successfully determined. The choice of a suitable method for monitoring the concentration of explosives in water was the first phase of this kind of research in MTI.

References
[1] Anđelković, M. Lukić: Refining of waste waters in processes of manufacturing and coating of high explosives, Military Technical Courier, 50 (2), (2002), 202-207.
[2] Marinović, V.: Examining the possibilities of removal of nitroaromatics from waste waters by dynamic adsorption, master thesis, TMF, Belgrade, 1989.

[3] Sunahara, G.I.: Ecotoxicology of Explosives, Boca Raton, 2009.
[4] Xiaodong, L., Bordunov, A., Tracy, M., Pohl, C.: A Total Solution to Baseline Separation of 14 Explosives in U.S. EPA Method 8330, reprinted from American Laboratory News, January 2007.
[5] Jing Xu, Qun C. and Rohrer, J.: Sensitive Determination of Explosive Compounds in Water, Application Note 1066, 2014.
[6] DIONEX, Product Manual, Acclaim Explosives, 2006.


MEASURING CLEANING CLASS OF OIL AFTER


TRIBOLOGICAL TESTING
RADOMIR JANJIĆ
Technical Test Center, Belgrade, lari32@mts.rs
SLOBODAN MITROVIĆ
Faculty of Engineering, University of Kragujevac, Kragujevac, boban@kg.ac.rs
DRAGAN DŽUNIĆ
Faculty of Engineering, University of Kragujevac, Kragujevac, dzuna@kg.ac.rs
IVAN MAČUŽIĆ
Faculty of Engineering, University of Kragujevac, Kragujevac, ivanm@kg.ac.rs
BLAŽA STOJANOVIĆ
Faculty of Engineering, University of Kragujevac, Kragujevac, blaza@kg.ac.rs
MILAN BUKVIĆ
Ph.D. student at the Faculty of Engineering, University of Kragujevac, Kragujevac, milanbukvic76@gmail.com
ZORAN ILIĆ
Technical Test Center, Belgrade, zoranilic_65@yahoo.com

Abstract: A lot of literature is devoted to the identification of wear particles, collected in order to monitor the status of technical systems and to analyze wear. Solid particles whose size is equal to the size of the gap have the biggest influence on the wear process, while particles whose size is three times smaller than the gap have little impact. Today, various physicochemical and tribological methods are used for diagnosing tribomechanical systems. The paper presents the analysis of oil samples containing wear particles generated during tribological tests of the lead tin bronze CuPb22Sn1,5 and the homogeneous alloy ZA27 at a sliding speed V = 1 m/s and three different values of the normal force, Fn = 10 dN, Fn = 15 dN and Fn = 20 dN. A mobile instrument for on-site oil analysis was used in the tests.
Keywords: particles, contamination, friction, wear, standard.

1. INTRODUCTION
Tribological problems in the moving parts of technical systems are the main causes of failure, so special attention is paid to reducing friction and wear and to lubrication in these systems. Due to the tribological processes in the contact zones, gaps greater than allowed develop in the tribomechanical system. In the case of an increased gap in the tribomechanical system, problems arise in reaching the working pressure, together with rapid heating, an increase in noise level, rapid wear of structural components and a range of other phenomena. A proper choice of materials and lubrication and a tribologically correct design can be an effective tool for reducing friction and wear in tribomechanical systems.

Contamination of the working fluid by solid particles


ranging in size from a few nanometers to a few micrometers
is responsible for increasing the wear and catastrophic
failures tribomechanical system. Wear, which can cause
contamination such as abrasion, surface cutting, flaking,
fatigue, and even scraping depends on the operational
conditions and the mechanical properties of particles.
Contamination of the lubricating oil in the broadest sense,
encompasses all the processes that lead to changes in its
characteristics, and structural and functional degradation.
Contamination of the lubricating oil leads to a minor or
major damage and dysfunction of the technical system of
which it is an integral part, since oil, performing his
function, comes in contact with all components of the
system.

The reliability of the moving parts of a technical system is significantly affected by the purity of the working fluid, which can be contaminated by water and by destructive solid particles of different origin, composition, size and shape. Elevated levels of wear cause a significant increase in the concentration and size of the worn particles. Solid particles of contaminants significantly affect the operation of tribomechanical systems in which a contactless sealing mechanism is present.

Lubricating oil, in addition to its basic role of separating the elements and reducing friction and wear in the tribomechanical system, can give us important information about the state of the technical system. Lubricating oil is an important element of many technical systems and over an extended period of time performs various important functions. Analysis of the lubricating oil includes a large number of tests whose primary objective is to determine the ability of the oil to carry out the functions for which it is intended. These tests also provide important information on the state of the technical system and of all its components that come into contact with the lubricating oil [2].

2. NEGATIVE INFLUENCE OF THE CONTAMINANTS
Mechanical impurities and water present in primary form are external contaminants introduced into the tribomechanical system from the environment, while the products of chemical reactions are generated within the system. Mechanical impurities are the most common source of oil contamination and the basic cause of the wear process on the contact surfaces in the lubricated system.
Mechanical impurities in primary form penetrate into the technical system from the environment, contaminate the oil and stimulate the tribological processes in the system and the degradation of the lubricating oil, resulting in secondary contamination (products of the wear process in the system).

The oil in a lubrication system always contains some level of particulate contamination; the oil can be contaminated even before the tribomechanical system is put into operation. Bearings with sliding elements are especially susceptible to damage caused by the particles contained in the oil, because they slide on smooth surfaces and require a thin separating oil film in order to function properly. Wear particles are usually larger than the thickness of the oil film, so when they are caught in the contact they damage the contact surfaces. This leads to the initiation of cracks which later lead to fatigue or to abrasive wear [3].

Solid contaminants can arise during the manufacturing and assembly process, can be generated by wear, can enter from the outside, or can be introduced during the maintenance and repair of technical systems. A large number of contaminants is constantly present in all contacts between mechanical elements, reducing the service life of the affected components, mechanisms and machines.

Analysis of the lubricating oil is a set of well-known and


widely used procedures for many years routinely used in the
context of all known concepts maintenance of technical
systems, as well as completely new, recently developed
technologies.

The mechanism of action of solid particles on the wear


behavior, performance and service life of components is
multiple and complex. The particles in the fluid flow
through components tribomechanical system, a part of them
comes to the surface in contact and accelerates the process
of wear and tear.

3. SYSTEM DEFINITION BY CLASS OF PURITY STANDARD

All cases of particle action can be reduced to an elementary contact with a base surface that is stationary, rotating or translating, with the solid particles carried by the fluid. On that occasion different wear processes arise, most often abrasive and, less frequently, erosive wear.

Since contamination of the working fluid must be given special attention, it is very important to know and understand the standards and the interpretation of the solid particle content in the working fluid. A number of different tests are used in oil analysis to assess its condition; the test should cover the area of interest, that is, the impurities in the lubricant. For the assessment of oil contamination by solid particles in accordance with the ISO 4406 and NAS 1638 (National Aerospace Standard) standards, electronic particle counters are used. The generally accepted and most commonly used method for determining the level of contamination of the working fluid by solid particles is defined by ISO 4406 [3, 4].

Large particles (up to a millimeter) often enter the mechanical system due to ineffective sealing between the mechanical system and the external environment; in this way air, dust, sand, metal shavings, etc. enter the system.
The consequences of increased oil contamination, if present, can be [1]:
- coarse particles (>15 µm) - sudden failures of components,
- fine dirt (5-15 µm) - wear and damage to components,
- fine dirt (<2-5 µm) - accumulation of sludge and oil-soluble deposits, rapid ageing of the oil,
- water in oil - corrosion, increased wear and damage to components, accelerated ageing of the oil.

As a result of many similar studies, ISO has developed a universal standard for measuring and marking the level of contamination in the fluid, known as the ISO cleanliness code, given in ISO 4406:1999. The number and size of particles in a fluid sample are measured, and the corresponding ISO code is then assigned from the table. The code is then compared with the target code set for the given mechanical system, which is determined on the basis of the allowable degree of wear and the optimal working life [3, 4].

Oils and lubricants often contain such contaminants that


have been either generated within the tribomechanical
system or entered from the external environment.
Contaminant particles may be entered in the tribological
contact between the elements in contact and damage the
mating surfaces of the elements within them. A large
number of such individual defects can cause great damage
to the elements and the technical system as a whole.

Contamination of the working fluid by solid particles is determined by assigning a corresponding code number according to a defined standard. It should be noted that different standards have different code numbers for the same cleanliness class of the working fluid. Classification of solid particles according to the earlier ISO 4406
was carried out within 24 code groups, from 1 to 24, while according to the new standard from 1999 (the same standard, ISO 4406) it is carried out in 29 groups, from 0 to 28. The previous standard counts the number of solid particles of sizes 5 and 15 micrometres (with an optional, subsequently introduced, size of 2 micrometres) in 100 mL of the sample, while the new one counts, in 1 mL of the oil sample, solid particles of sizes 4, 6 and 14 micrometres (c), measured in three dimensions (Pictures 1 and 2) [3, 4, 5].
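To make the coding step concrete, the sketch below assigns ISO 4406:1999 scale numbers to cumulative particle counts per millilitre using the standard doubling ranges (each scale number doubles the allowed count). The threshold table is abbreviated here (scale numbers below 12 and above 21 are omitted) and the example counts are those reported for the first sample in Table 1 of Section 5; consult the standard itself for the normative limits.

```python
# Upper count limits (particles per mL) for ISO 4406:1999 scale numbers (abbreviated table)
ISO_UPPER = {12: 40, 13: 80, 14: 160, 15: 320, 16: 640, 17: 1300,
             18: 2500, 19: 5000, 20: 10000, 21: 20000}

def scale_number(count_per_ml):
    """Return the ISO 4406 scale number whose range contains the given cumulative count."""
    for code in sorted(ISO_UPPER):
        if count_per_ml <= ISO_UPPER[code]:
            return code
    raise ValueError("count outside the abbreviated table")

counts = {">4 um": 6264, ">6 um": 822, ">14 um": 193}   # per mL, first sample in Table 1
code = "/".join(str(scale_number(c)) for c in counts.values())
print("ISO 4406 code:", code)                            # -> 20/17/15
```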

When checking the oil condition in practice, it is necessary to respect the applicable technical standards and measurement methods in the field of hydraulic and lubrication systems. The diagnostic equipment should be in accordance with the applicable standards, tested both in laboratory and in field conditions, and suitable for the purpose. The measured values are recorded at checkpoints, processed and displayed.
Several tests have been developed for analyzing the particulate contamination of oil. The instruments and equipment for performing these tests, both laboratory devices and those for on-site analysis, are well developed and are still being intensively improved.


For a long time, the analysis of lubricating oil was based solely on the use of specialized laboratory services. The appearance of the first mobile instruments for on-site analysis of lubricating oil was a significant step forward that brought many benefits, such as:
- analysis in real time,
- immediate results of the performed oil analysis procedures,
- the possibility of unlimited repetition of measurements, if necessary,
- measurement and analysis conducted by people who know the equipment and machinery,
- a significant reduction in costs related to laboratory services, sampling and shipment of samples,
- a reduced opportunity for errors and contamination of samples,
- the ability to control the incoming oil, and
- a significantly shorter time between contamination control and the appropriate maintenance actions.


Picture 1. Class of cleanliness according to the old ISO


standard


This method of oil analysis is fast, efficient and cost-effective, with high sensitivity, and thanks to these advantages it represents a powerful predictor of failures of technical systems.


4. EXPERIMENT

Picture 2. Class of cleanliness according to the new ISO


standard

Based on the tests of the tribological properties of the lead tin bronze CuPb22Sn1,5 and the alloy ZA27 at a sliding speed V = 1 m/s and three different values of the normal force, Fn = 10 dN, Fn = 15 dN and Fn = 20 dN, it was concluded that the presence of wear particles in the contaminated oil should be checked.

When defining the number of solid particles in an oil sample, it should be borne in mind that the cumulative number of particles larger than the indicated size is counted. This means that the number of solid particles of size [5]:
- 4 µm includes all solid particles greater than or equal to 4 µm, including the particles of sizes 6 and 14 µm;
- 6 µm includes all solid particles greater than or equal to 6 µm, including the particles of size 14 µm;
- 14 µm includes all solid particles greater than or equal to 14 µm.
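To illustrate the cumulative convention above, the counts in the individual size bands can be recovered by simple subtraction; the per-millilitre numbers below are those reported for the first sample in Table 1 (Section 5), used here only as a worked example.

```python
# Cumulative counts per mL (key = lower size limit in um)
cum = {4: 6264, 6: 822, 14: 193, 24: 75}

sizes = sorted(cum)
for lo, hi in zip(sizes, sizes[1:]):
    print(f"{lo}-{hi} um: {cum[lo] - cum[hi]} particles/mL")
print(f">{sizes[-1]} um: {cum[sizes[-1]]} particles/mL")
```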

Detection of particles in the oil was carried out at the


Faculty of Mechanical Engineering in Kragujevac.
Equipment that is used consists of: sensors, measuring
devices, peristaltic pump, systems for data acquisition in the
computer, software and computer equipment for data
storage. The equipment is easy to operate in real time to
obtain reliable and accurate information about the level of
cleanliness of the working fluid according to standard ISO
4406:1999.

The number of particles can be determined by the laser or the optical method. The laser method gives the amount, size and distribution of the particles, whereas the optical method provides their identification; a combination of both methods is used. The results of determining the amount of particles are usually expressed on the ISO purity scale.

The measuring system is shown in Picture 3. This


measuring system enables measuring the amount of
particulate matter, temperature and relative humidity.


The measured values are collected and processed by using


software that was created in Windows applications.
The software allows the creation of new data has been
performed on the basis of the measured value. Software
quality reflects in the review work environment that
provides numerical and graphical representation of the
signal.
Undissolved air in the fluid mass presents in the form of
bubbles, and it needs to be forced out of the fluid in the
ultrasonic tubs. In the absence of ultrasonic tub bubbles
gently squeeze the circular motion of the sample. Thus,
small particles by this method have been evenly distributed
in the sample oils creating a homogeneous environments.
Improperly prepared samples can easily ruin the previously
carefully prepared conditions for testing.
Collecting oil sample has been carried out in safe conditions
and in a manner that does not introduce dust and other
impurities in the sample. Pump for sample extraction of oil
from the vessel after a tribological test was using properly in
order to avoid further contamination of the oil sample, and
each sample was labeling before testing.
The amount of sample was diluting with hydraulic oil up to
200 ml and allowed to flow through a small capillary tube
that moves through the field of sensors. The laser sensor
measures the size and quantity of particles in the
contaminated oil every 10 sec.

Picture 3. Measuring system


Analysis equipment today is standard equipment for oil
analysis laboratories, which provides information on the
state relatively quickly and accurately.

During the experiment data measured sizes displayed on the


monitor screen and saved on computer disk.

The device for determining the degree of purity of the oil contains:
- a sensor part (located in the device),
- a laser source whose beam passes through the stream of working fluid (particles passing the detector partially obscure the beam, which generates a voltage proportional to the size and number of particles),
- a section for signal processing, and
- a part for displaying the results.

The application allows:
- collecting data from the device continuously over time,
- displaying (numerically and graphically) the measured values in real time,
- creating files of measured values stored on the computer disk, and
- displaying the measurement results.
The amount of particles is measured in the device by the laser sensor every 10 seconds, over 32 signals for each sample.
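Since the cleanliness class reported in Section 5 is based on the arithmetic mean of repeated counts, a minimal averaging step (with made-up repeat readings, shown only as a sketch) could look like this:

```python
# Hypothetical repeated readings (particles/mL in the >4 um channel) from successive 10 s cycles
readings = [6310, 6180, 6295, 6271]
mean_count = sum(readings) / len(readings)
print(f"mean count: {mean_count:.0f} particles/mL")
```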

The device is a high performance, designed to provide


precise measurement results. The device counts particles
with high resolution in four particle size channels, available
via the interface data for the report in real time on PC, purity
according to standard ISO 4406:1999 for particle size
classes > 4 m, > 6 m, > 14 i > 24 m that are displayed
on the screen of the instrument. (Picture 4).

There have been measured only basic parameters: the


amount and particle size, temperature and relative humidity.
The device detects small and large particles up to 24
microns. These coarse particles of 24 microns are
particularly important because they represent the first
indicators of increased wear intensity.

A Seko PR7 peristaltic pump (3.5 W) is used to feed the particles into the test device. The peristaltic pump is designed to transfer the contaminated oil without additional contamination. Its working principle is that, inside a circular housing, a rubber hose is pressed by two diametrically opposite rollers mounted on a shaft. The rollers push the oil along, so the pump has a very small impact on the oil being transferred. The oil is transferred within the hose at 7 L/h with a pressure of up to 0.1 bar.

The oil sample has been excited so that each particle emits
or absorbs a certain amount of energy, that indicates the
concentration of particles in the oil.
Results represent the concentration of all dissolved and
metal particles. The results include a report on the size and
amount of particles as well as the temperature and relative
humidity.


Picture 4. Measuring the purity class



5. ANALYSIS OF THE RESULTS

Based on the results of the measurements of the particles of the leaded tin bronze CuPb22Sn1,5 and the alloy ZA27 in the contaminated oil, and on the determination of the arithmetic mean of the number of particles, the set of obtained data is shown in Table 1.

Table 1. Class of cleanliness according to the standard ISO 4406:1999

Sample              ISO code                     Number of particles
                    >4µm  >6µm  >14µm  >24µm    >4µm   >6µm  >14µm  >24µm
CuPb22Sn1,5-10dN    20    17    15     13       6264   822   193    75
CuPb22Sn1,5-15dN    19    16    14     12       4688   545   90     34
CuPb22Sn1,5-20dN    20    17    14     13       5866   707   140    62
ZA27-10dN           20    17    14     13       7006   976   134    48
ZA27-15dN           19    16    14     12       3968   530   93     40
ZA27-20dN           20    17    14     12       5988   676   97     37

Picture 5. Number of particles from 4 µm to 24 µm for CuPb22Sn1,5 and ZA27 at a load of 10 dN

Picture 6. Number of particles from 4 µm to 24 µm for CuPb22Sn1,5 and ZA27 at a load of 15 dN

In Pictures 5, 6 and 7 the comparative results for the particle sizes of the lead tin bronze CuPb22Sn1,5 and the alloy ZA27, depending on the normal force in the lubricated tribological tests, are presented as histograms.

Picture 7. Number of particles from 4 µm to 24 µm for CuPb22Sn1,5 and ZA27 at a load of 20 dN

Also, based on the tabulated data (Table 1), it can be concluded that the concentration of particles of CuPb22Sn1,5 and of the alloy ZA27 is lowest at the normal force Fn = 15 dN.

After the tribological tests, the wear track widths on the contact surfaces of the leaded tin bronze CuPb22Sn1,5 and the alloy ZA27 were measured on a UIM-21 universal measuring microscope, for the normal forces Fn = 10 dN, Fn = 15 dN and Fn = 20 dN (Picture 8).

Picture 8. Wear track width of the leaded tin bronze CuPb22Sn1,5 and the ZA27 alloy as a function of the normal force

Taking into account the results of the tribological tests, the measured wear track widths and the determined numbers of particles, the following can be concluded:
- the track width is greater for the ZA27 alloy at the normal forces Fn = 10 dN and Fn = 15 dN than at the normal force Fn = 20 dN,
- at the normal force Fn = 10 dN the track width is greater for the ZA27 alloy, as is the number of particles larger than 4 µm and 6 µm,
- at the normal force Fn = 15 dN the track width is greater for the ZA27 alloy, as is the number of particles larger than 14 µm and 24 µm,
- at the normal force Fn = 20 dN the track width is greater for the ZA27 alloy, as is the number of particles larger than 6 µm, 14 µm and 24 µm.

6. CONCLUSION
The paper analyzes the presence of particles in the contaminated fluid after tribological tests of the leaded tin bronze CuPb22Sn1,5 and the alloy ZA27 at a sliding speed V = 1 m/s and three different values of the normal force, Fn = 10 dN, Fn = 15 dN and Fn = 20 dN.
By applying this measurement and combining it with the analysis of fine and coarse particles in the contaminated oil, depending on the tribological test mode, a complete picture of the wear particles of a given tribomechanical system can also be obtained. On this basis, it can be concluded which measures and procedures should be undertaken in the design and operation stages in order to achieve an optimum service life and minimal maintenance costs of the technical system.
Modern methods of designing technical systems must take into consideration the tribological processes that occur during their use, which makes tribologically correct design imperative.
This test may represent a diagnostic-forecasting technique that provides a convenient way of assessing the exact on-line status of lubricated parts in contact without shutting down the technical system.

References
[1] Miroslav Medenica, Dušan Šotra: Diagnostics and oil treatment, Technical Diagnostics, VTSS, Vol. 9, No. 4, pp. 15-18, Belgrade, 2010.
[2] Miroslav Babić: Lubricants monitoring, Faculty of Engineering, Kragujevac, 2004.
[3] Velibor Karanović: Development of a solid particle influence model on performance of piston-cylinder contacting pairs for hydraulic components, doctoral dissertation, Faculty of Technical Sciences, Novi Sad, 2015.
[4] Standard: Hydraulic fluid power - Fluids - Method for coding the level of contamination by solid particles, SRPS ISO 4406:2005.
[5] Vladimir Savić, Mitar Jocanović, Milija Krajišnik: The new approach for the need of hydraulic fluid, 8th International Tribology Conference, Yugoslav Tribology Society, pp. 198-201, Belgrade, 2003.
[6] Bosch Rexroth: Rexroth oil cleanliness booklet, 2011.
[7] Eaton Vickers: Systematic approach to contamination control, 2002.
[8] Dragan Grgić: On-line monitoring of oil quality and conditioning in hydraulics and lubrications systems, 10th International Conference on Tribology, pp. 305-309, Kragujevac, 2007.



RECYCLING LITHIUM - ION BATTERY


MILAN BUKVIĆ
Ph.D. student at the Faculty of Engineering, University of Kragujevac, milanbukvic76@gmail.com
RADOMIR JANJIĆ
Technical Test Center, Belgrade; Ph.D. student at the Faculty of Engineering, University of Kragujevac, lari32@mts.rs
BLAŽA STOJANOVIĆ
Faculty of Engineering, University of Kragujevac, blaza@kg.ac.rs

Abstract: Lithium-ion battery (LIB) applications in consumer electronics and in hybrid and electric vehicles are growing rapidly, resulting in a growing demand for resources, including cobalt and lithium. Recycling of batteries will therefore be a necessity, not only to reduce the consumption of energy, but also to relieve the shortage of rare resources and to eliminate the pollution from hazardous components, towards sustainable industries related to consumer electronics and hybrid and electric vehicles. In analysing the recycling processes of spent LIBs, the structure and components of the batteries are introduced and the available treatment operations are summarized, including pretreatment, secondary treatment and deep recovery. Additionally, many problems and the prospects of the current recycling processes are presented and analyzed. The aim of this paper is to stimulate further interest in spent LIB recycling and in the appreciation of its benefits.
Keywords: lithium-ion battery, problems, prospect, recycling, waste.

1. INTRODUCTION
The increase in the demand for energy in electric and electronic devices, as well as for powering hybrid and electric vehicles, significantly increases battery consumption and therefore the use of materials, which in the long term increases the amount of hazardous waste [1].
Like other electronic and electrical devices, lithium-ion batteries are discarded at the end of their life cycle, passing from global "electronic wonders of technology" to "electronic waste" in the absence of adequate policies and of feasible, economically viable technology that would allow adequate recycling of batteries. Thus, recycling and recovery of the main components of used lithium-ion batteries currently seems to be the optimal way to prevent environmental pollution and the consumption, or rather the waste, of rare and valuable raw materials [2].
Through the description of the recycling process of waste lithium-ion batteries, the pretreatment stage of battery recycling, the secondary treatment in the form of separation of the various battery components and the final (deep) recycling process of waste lithium-ion batteries will be systematically introduced, with a presentation of the main problems and challenges of the current recycling technologies, as well as of the future prospects in the recycling of lithium-ion batteries [4].

2. STRUCTURE OF LITHIUM - ION


BATTERY
Unlike conventional batteries, lithium-ion batteries do not use
the reduction process for the creation and accumulation of
electricity. Instead, the lithium ions move between the anode
and cathode, forcing electrons to move with them. Basically,
the lithium-ion batteries consist of: cathode, anode, electrolyte
and separator. In addition, inevitably, these types of batteries
and have a protective metal sheath, the protective plastic
element and the electronic control unit [5-7].

Therefore, for a complete overview of the current state and future prospects in the recycling of waste lithium-ion batteries, it is necessary first to examine the structure and configuration of these batteries in order to determine the most appropriate separation and mechanical treatment processes, and to analyze the available incentives for recycling waste lithium-ion batteries. These include the amount of waste lithium-ion batteries, the implementation of environmental protection measures related to the use and recycling of lithium-ion batteries, and the extraction and utilization of the scarce materials built into a lithium-ion battery that should be recovered by the recycling process, primarily lithium and cobalt [3], [4].

Carbon is commonly used for making battery anodes. In practice, the applied carbon material is deposited on a copper foil, to which it is fixed by a separate polymeric binder [8].
The basis for making battery cathodes is an aluminium foil, while the active material is usually one of a wider range of composite materials that contain lithium, mostly in the form of oxides. Various forms of lithium composites are in use, for example lithium cobalt oxide (LiCoO2), lithium manganese oxide (LiMn2O4), lithium nickel oxide (LiNiO2) and lithium vanadium oxide (LiV2O3), as well as mixed lithium composites with cobalt, iron and phosphorus, i.e. Li(NiCoMn)O2 and LiFePO4. However, the most common material for the production of cathodes is lithium cobalt oxide, primarily due to its good performance, namely the high capacity it provides, the short charging time and the relatively long discharge time of the battery [8], [9].



The electrolyte in batteries provides the movement of lithium ions between the electrodes and represents the environment in which chemical energy is converted into electrical energy. Basically, the electrolyte is an organic liquid with an addition of a dedicated electrolyte salt, such as LiPF6, LiBF4, LiCF3SO3 or Li(SO2CF3)2, with LiPF6 being the most common. Because the voltage of a lithium-ion battery (~3.6 V) is higher than the standard voltage of water electrolysis (1.23 V at 25 °C), the applied components must not contain water [10]. Figure 1 shows a schematic representation of the components of various forms of lithium-ion batteries.

The separator provides the interface between the anode and


the cathode, and prevents the occurrence of a short circuit
by direct contact between the electrodes and constitutes the
microfilm usually made of polymers, such as polyethylene
or polypropylene.

Figure 1 - Schematic representation of the components of various forms of lithium - ion battery, a - cylindrical, b flat, c - prismatic, d - thin and flat [11]
3. GENERAL ON THE PROCESS OF RECYCLING LITHIUM-ION BATTERIES

In light of the increasing use of lithium-ion batteries, the growing awareness of environmental protection, the use of very valuable raw materials (elements) for the production of batteries and the limited resources of the raw materials applied in this type of battery, there is inevitably a need for a lithium-ion battery recycling process that is highly profitable in every respect.
Figure 2 The complete process of recycling of waste lithium - ion battery



Figure 2 schematically shows one of the possible methods of recycling lithium-ion batteries, with three clearly defined basic stages of the recycling process.


The pretreatment phase of battery recycling is directed at the removal of certain hazardous materials and at the separation of the individual battery components to the greatest extent possible. During the second phase of recycling, the main processes of separation of the battery materials and of their decomposition take place. Finally, the last (deep) recycling process is focused on the extraction of valuable and very rare materials (e.g. the elements manganese and nickel), which can be used in making new batteries and other products that contain these valuable materials. After carrying out the three phases of recycling of lithium-ion batteries, mainly metals are obtained, such as copper, aluminium, iron, cobalt, lithium, nickel and manganese, as well as carbon and various plastics. Generally, metals such as iron, aluminium and copper are obtained in pure, elemental form, whereas cobalt, nickel, lithium and manganese are usually obtained in the form of different compounds, e.g. CoSO4, CoCO3, CoC2O4, Co3O4, LiCoO2 and Li2CO3 [12-15].


4. PRETREATMENT OF RECYCLING LITHIUM-ION BATTERIES
Lithium-ion batteries are generally such complex and sensitive structures that the direct implementation of pyrometallurgical and hydrometallurgical procedures would be extremely inefficient, so a pretreatment of the battery is applied first in order to prevent damage and the loss of very valuable materials. In order to prevent a short circuit, the batteries must first be completely discharged. The pretreatment of the battery is carried out by mechanical and by manual separation of the individual components and materials. Manual separation of the components is generally and more frequently used for the separation of the plastic and metal materials, generally the battery housing, as shown schematically in Figure 3 [16-18].
The anode and cathode can be manually separated after removal of the protective shell (casing) and of the terminals, followed by drying for 24 hours at a temperature of 60 °C. All actions in the pretreatment are performed by trained personnel, with the mandatory use of protective equipment (goggles, gloves, protective breathing masks) [19]. For example, during this process the largest amount of copper is found in fractions of a size greater than 0.59 mm, and copper is separated from the battery in an amount of up to 93.1 %. In order to increase the efficiency of the process, mechanical separation is combined with crushing and screening of the obtained battery fragments. During the final step of the pretreatment, unwanted materials are removed with the help of thermal processes, such as pyrolysis and incineration, in order to obtain the highest possible purity of the substances entering the secondary treatment of lithium-ion battery recycling; for example, the cathode parts are first heated to 150-500 °C for a period of 1 hour to burn off the binder and organic impurities, and then to 700-900 °C for a period of 1 hour, to remove primarily the carbon [20].
In recent years especially, an advanced procedure has been developed that combines vacuum pyrolysis with hydrometallurgical techniques in order to separate as much cobalt and lithium from the batteries as possible, and at a higher purity [21].

Figure 3 - Schematic representation of pretreatment of recycling lithium - ion batteries [3]



5. SECONDARY TREATMENT OF RECYCLING LITHIUM-ION BATTERIES
After completion of the pretreatment, a certain amount of the anode and cathode material of the recycled lithium-ion batteries has still not been separated from the Al and Cu foils. Figure 4 schematically shows the secondary treatment in the recycling of used lithium-ion batteries, in which principally Cu, a Cu solution, Al, an Al solution, cobalt and a carbon solution are separated [10].
Through a carefully controlled and progressive hydrothermal process, the separation of LiCoO2 from the waste batteries and the regeneration of valuable compounds for the production of new batteries can be combined. During this procedure concentrated LiOH is used at a temperature of 200 °C, with a gradual temperature increase of 3 °C/min [22].

In bioleaching, as one of the procedures by which individual scarce materials are separated in battery recycling, the key microorganisms are iron-oxidizing and sulfur-oxidizing bacteria (lat. Acidithiobacillus ferrooxidans, Thiobacillus ferrooxidans), which show a remarkable tendency to separate and preserve precious metals, particularly cobalt and lithium. This process is aided by certain compounds, such as (NH4)2SO4, K2HPO4, MgSO4·7H2O, CaCl2·2H2O and FeSO4·7H2O; the purity of the obtained cobalt is about 98 %, while the purity of the lithium is about 72 % [15].
Leaching with acids is, compared to the previous one, the most common procedure for separating the cathode materials of lithium-ion batteries. Leaching may be carried out with conventional acids, such as sulfuric acid (H2SO4), hydrochloric acid (HCl) and nitric acid (HNO3), as well as with certain organic acids, the most common of which is citric acid [25].

Ultrasonic treatment is mostly used for the separation of the cathode materials from the Al foil [23].
NMP treatment is applied to the layered composite structure of lithium-ion batteries, in which a binder is used to increase the adhesion of the active material to the Al and Cu foils; in the secondary treatment of the battery recycling process this composite is separated by heating to 100 °C, with simultaneous separation of the Al and Cu [24].
Figure 4 - Secondary treatment recycling of used lithium - ion battery



6. FINAL (DEEP) TREATMENT OF RECYCLING LITHIUM-ION BATTERIES
The final (deep) treatment in lithium-ion battery recycling combines the processes of solvent extraction, precipitation, electrolysis, crystallization and calcination, in order to obtain materials of the highest purity, as well as those most valuable materials that could not be obtained in the previous recycling phases (manganese, nickel and cobalt); it is shown schematically in Figure 5 [10].

An improved final process of lithium-ion battery recycling, developed in recent years, is a pyrometallurgical process in which a controlled electric arc of appropriate characteristics is used, with the aim of obtaining the finest fractions of lithium and cobalt [27].
The universal finishing process of lithium-ion battery recycling, which has the widest application, consists of a process in which manganese is first separated in a liquid solution, to give manganese oxide (MnO2) and manganese hydroxide. In the remainder of the process nickel is separated, by means of dimethylglyoxime compounds. At the end of the universal process lithium is also separated, in the form of the compound Li2CO3. The purity of the elements obtained in this process is as follows: 96.97 % for lithium, 98.23 % for manganese, 96.94 % for cobalt and 97.43 % for nickel [26].

Figure 5 - Deep (final) treatment recycling of used lithium - ion battery.


7. CONCLUSION
The increasing use of electronic equipment and electrical machinery, and of electric and hybrid vehicles, inevitably increases the requirements regarding the use of rare and expensive materials, such as cobalt, lithium, copper or aluminium, which are used in the production of lithium-ion batteries, the main source of electrical power in these machines and appliances. On the other hand, used lithium-ion batteries may explode or leak and damage human health or pollute the environment in the case of improper disposal or treatment after the end of their life cycle.
The following could serve as areas of further research in the field of recycling waste lithium-ion batteries:
- shortening the battery recycling process while retaining a high purity of the sorted materials,
- introduction of more automated and software-controlled pretreatment processes in lithium-ion battery recycling,
- development of sophisticated separation techniques for particularly rare materials, primarily within the secondary and final treatment of battery recycling,
- development and improvement of the system for collecting used batteries that are the subject of recycling, both from the technical-technological point of view and from the legal-normative aspect, in terms of laws and other regulations.

References
[1] Armand, M., and Tarascon, J.-M. (2008). Building better batteries. Nature 451, 652-657.
[2] Wakihara, M. (2001). Recent developments in lithium ion batteries. Materials Science and Engineering R33, 109-134.
[3] Dunn, B., Kamath, H., and Tarascon, J. M. (2011). Electrical energy storage for the grid: A battery of choices. Science 334, 928-935.
[4] Lankey, R. L., and McMichael, F. C. (2000). Life-cycle
methods for comparing primary and rechargeable
batteries. Environmental Science & Technology 34,
22992304.
[5] Alper, J. (2002). T he battery: Not yet a terminal case.
Science 296, 12241226.
[6] Xu, K . (2004). Nonaqueous liquid electrolytes for
lithium-based rechargeable batteries. Chemical
Reviews 104, 43044417.
[7] Ekermo, V. (2009). Recycling o pportunities for
lithium-ion batteries from hybrid electric vehicles .
Chalmers
University
of
Technology,
https://www.chalmers.se/chem/EN/divisions/indstrialrecycling/finished-project/recyclingopportunities/downloadFile/attachedFilef0/Recycling
opportunities for Li-ion. pdf?nocache=1294145371.31.
[8] Kang, K. (2006). Electrodes with high power and high
capacity for rechargeable lithium batteries. Science
311, 977980.
[9] Stephan, A. M. (2006). R eview on gel polymer
electrolytes for lithium batteries. European Polymer
Journal 42, 2142.
[10] Xianlai Zenga, Jinhui Lia & Narendra Singha,
Recycling of Spent Lithium-Ion Battery: A Critical
Review, 2014.
[11] Tarascon, J.-M., and Armand, M. (2001). Issues and
challenges facing recharge-able lithium batteries.
Nature 414, 359367.
[12] Mishra, D., K im, D. J., Ralph, D., Ahn, J. G., and
Rhee, Y. H. (2008). Bioleaching of metals from spent
lithium-ion secondary batteries using Acidithiobacillus
ferrooxidans. Waste Management 28, 333338.
[13] Zeng, G., Zou, J., Peng, Q., Wen, Z., and Xie, Y.
(2009). An efficient breeding strains of bioleaching
cobalt and lithium from spent lithium-ion battery.
Chinese patent no. CN 101,570,750 A.



[14] Xin, B., Zhang, D., Zhang, X., Xia, Y., Wu, F., Chen, S., and Li, L. (2009). Bioleaching mechanism of Co and Li from spent lithium-ion battery by the mixed culture of acidophilic sulfur-oxidizing and iron-oxidizing bacteria. Bioresource Technology 100, 6163-6169.
[15] Zeng, G., Deng, X., Luo, S., Luo, X., and Zou, J. (2012). A copper-catalyzed bioleaching process for enhancement of cobalt dissolution from spent lithium-ion batteries. Journal of Hazardous Materials 199-200, 164-169.
[16] Li, L., Lu, J., Ren, Y., Zhang, X. X., Chen, R. J., Wu, F., and Amine, K. (2012). Ascorbic-acid-assisted recovery of cobalt and lithium from spent Li-ion batteries. Journal of Power Sources 218, 21-27.
[17] Li, L., Chen, R., Sun, F., Wu, F., and Liu, J. (2011). Preparation of LiCoO2 films from spent lithium-ion batteries by a combined recycling process. Hydrometallurgy 108, 220-225.
[18] Ferreira, D. A., Prados, L. M. Z., Majuste, D., and Mansur, M. B. (2009). Hydrometallurgical separation of aluminium, cobalt, copper and lithium from spent Li-ion batteries. Journal of Power Sources 187, 238-246.
[19] Li, D., Wang, C., Chen, Y., Jie, X., Yang, Y., and Wang, J. (2009). Leaching of valuable metals from roasted residues of spent lithium-ion batteries. The Chinese Journal of Process Engineering 9, 264-269.
[20] Fouad, O. A., Farghaly, F. I., and Bahgat, M. (2007). A novel approach for synthesis of nanocrystalline LiAlO2 from spent lithium-ion batteries. Journal of Analytical and Applied Pyrolysis 78, 65-69.
[21] Sun, L., and Qiu, K. (2011). Vacuum pyrolysis and hydrometallurgical process for the recovery of valuable metals from spent lithium-ion batteries. Journal of Hazardous Materials 194, 378-384.
[22] Kim, D.-S., Sohn, J.-S., Lee, C.-K., Lee, J.-H., Han, K.-S., and Lee, Y.-I. (2004). Simultaneous separation and renovation of lithium cobalt oxide from the cathode of spent lithium ion rechargeable batteries. Journal of Power Sources 132, 145-149.
[23] Li, J., Zhao, R., He, X., and Liu, H. (2008). Preparation of LiCoO2 cathode materials from spent lithium-ion batteries. Ionics 15, 111-113.
[24] Lu, X., Lei, L., Yu, X., and Han, J. (2007). A separation method for components of spent Li-ion battery. Battery Bimonthly 37, 79-80.
[25] Li, L., Ge, J., Wu, F., Chen, R., Chen, S., and Wu, B. (2010). Recovery of cobalt and lithium from spent lithium ion batteries using organic citric acid as leachant. Journal of Hazardous Materials 176, 288-293.
[26] Wang, R.-C., Lin, Y.-C., and Wu, S.-H. (2009). A novel recovery process of metal values from the cathode active materials of the lithium-ion secondary batteries. Hydrometallurgy 99, 194-201.
[27] Georgi-Maschler, T., Friedrich, B., Weyhe, R., Heegn, H., and Rutz, M. (2012). Development of a recycling process for Li-ion batteries. Journal of Power Sources 207, 173-182.


NUMERICAL CALCULATION OF J-INTEGRAL USING FINITE ELEMENTS METHOD
BAHRUDIN HRNJICA
University of Bihać, Faculty of Technical Engineering Bihać, Bihać, bahrudin.hrnjica@unbi.ba
FADIL ISLAMOVIĆ
University of Bihać, Faculty of Technical Engineering Bihać, Bihać, f.islam@bih.net.ba
DŽENANA GAČO
University of Bihać, Faculty of Technical Engineering Bihać, Bihać, dzgaco@bih.net.ba
ESAD BAJRAMOVIĆ
University of Bihać, Faculty of Technical Engineering Bihać, Bihać, bajramovic_e@yahoo.com

Abstract: This paper presents the procedures and results of numerical determination of fracture mechanics parameters in
condition of elastic-plastic fracture mechanics. The finite element analyses were performed on standard Single-edge notched
bend, SENB, specimen of structural steel. Discretization was performed on three specimens with different crack lengths. The
results of numerical analysis were compared with the experimental results, in order to evaluate numerical models. The
results of analysis show good approximation of numerically determined J integral compared with experimental results.
Keywords: fracture mechanics, J integral, finite elements method, numerical integration, SENB specimen.

1. INTRODUCTION
The Finite Element Method (FEM) has become a recognized method and an integral part of every CAE (Computer Aided Engineering) system. It is a useful method in the analysis of many complex systems around us. At the beginning of its development, FEM was considered an expanded matrix method of structural analysis, i.e. analysis of the stress-strain state of a structure [1].

FEM has been successfully used in fracture mechanics from its very beginnings. This paper applies the finite element method to the calculation of the J integral, a parameter of elastic-plastic fracture mechanics. Fracture mechanics parameters are generally obtained experimentally, but such experiments are too expensive to be conducted routinely for each individual case [3,7].

The rapid development of FEM mostly is due to the rapid


and successful development of computer science and
software development in which algorithms based on FEM
were successfully implemented. Software packages for FEM
provided accurate and convenient tool to calculate the
system of algebraic equations, which form one of the basic
features of FEM.

Efforts to replace, to some extent, the experimental method


with numerical methods is a logical approach. With the
development of computer science, hardware and software
solutions it is possible to simulate almost all natural
processes. Therefore, interest has increased in the
development of a software module to calculate the J integral
as one of the most important parameters of fracture
mechanics[5].

Today FEM is applied in a wide range of applied sciences,


from engineering and bio-mechanics to electromagnetic
fields and thermodynamics. FEM is not only a method of
simple linear elastic stress-strain analysis, but has proved to
successfully solve elastic-plastic problems, problems of
complex non-linear, non-stationary dynamics, non-linear
and non-stationary heat transfer, fluid mechanics, biomechanics and other [2].

Today, in the latest versions of Ansys, Abaqus and other


popular software packages for numerical analysis there are
also modules for calculating the J integral, as well as other
parameters of fracture mechanics[8-10].

The application of the finite element method is today widespread, primarily due to the ease of implementation and the numerical approach to solving differential equations. The finite element method can be used wherever it is necessary to solve ordinary or partial differential equations. The wide application of the method is confirmed by the fact that natural phenomena are very similar, because they are described by the same differential equations and differ only in the type of variables and in the initial and boundary conditions [1-6].

2. NUMERICAL DETERMINATION OF J INTEGRAL

Numerical determination of fracture mechanics parameters has been developed over the last 40 years, during which more than a dozen methods have been proposed. Generally, all methods can be divided into point matching methods and energy methods. The former determine the stress intensity factor directly from the stress and strain fields in the structure, while the latter determine the strain energy, which is then used to calculate the stress intensity factor.

The advantage of the energy methods is that they can be applied to non-linear problems, while their disadvantage is the impossibility of separating the strain energy according to the crack separation modes. All of today's modern numerical methods for the determination of fracture mechanics parameters are based on the energy domain integral method [3].

Since the current software for stress analysis calculates the stress-strain state at the integration (Gaussian) points, the numerical calculation of the J integral is based on these points. For this reason, the contour around the crack tip is taken through the integration points, not through the finite element nodes [4]. The numerical calculation of the J integral begins with the known expression for the contour integral.

The general form of the J integral, with the x axis parallel to the crack growth direction, is

J = -\frac{\partial \Pi}{\partial a} = \int_{\Gamma} \left( W_s \, dy - T_i \frac{\partial u_i}{\partial x} \, d\Gamma \right),   (1)

where:
T_i – stress vector,
u_i – displacement vector,
W_s – strain energy density,
d\Gamma – contour arc length.

Figure 1. Finite elements around the crack tip and definition of the closed contour around the crack tip

By defining each component of the J integral in expression (1), a suitable analytical expression is obtained that can be calculated numerically.

The strain energy density can be written as

W_s = \frac{1}{2} \left[ \sigma_{xx} \frac{\partial u_x}{\partial x} + \sigma_{xy} \left( \frac{\partial u_x}{\partial y} + \frac{\partial u_y}{\partial x} \right) + \sigma_{yy} \frac{\partial u_y}{\partial y} \right].   (2)

The increment of the y coordinate along the contour is

dy = \frac{\partial y}{\partial \lambda} \, d\lambda.   (3)

The scalar product of the stress vector T and the displacement gradient is

T_i \frac{\partial u_i}{\partial x} = \left( \sigma_{xx} n_1 + \sigma_{xy} n_2 \right) \frac{\partial u_x}{\partial x} + \left( \sigma_{xy} n_1 + \sigma_{yy} n_2 \right) \frac{\partial u_y}{\partial x}.   (4)

The contour differential is calculated as

d\Gamma = \sqrt{ \left( \frac{\partial x}{\partial \lambda} \right)^2 + \left( \frac{\partial y}{\partial \lambda} \right)^2 } \, d\lambda,   (5)

for d\lambda = const.

If expressions (2), (3), (4) and (5) are inserted into (1), an expression suitable for integration is obtained. The contour through which the integration is conducted has been chosen so as to fit the elements of the stiffness matrix; for this reason, the contour must pass through the Gaussian integration points. All expressions included in equation (1) are known and can be obtained directly from standard programs for numerical analysis.

The evaluation of the final expression is done through the integration points along the contour (Figure 1), so the discrete form of equation (1) is obtained:

J = \sum_{g=1}^{n_g} W_g \, I(\xi_g, \eta_g),   (6)

where:
W_g – Gaussian weight factor,
n_g – number of integration points,
I_g – integrand calculated for each integration point g.

The index g denotes that the values contained in expression (6) are taken at the Gaussian integration points, and not at the nodes of the finite elements [3].
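Expression (6) reduces the contour integral to a weighted sum over the Gaussian integration points lying on the chosen contour. The following minimal sketch is not the authors' implementation; the data layout, the function name and the assumption that stresses, displacement gradients, contour normals and derivatives with respect to the contour parameter are exported per integration point by an FEM post-processor are all hypothetical. It only illustrates how equations (2)-(6) combine into a single loop:

```python
import math

def j_integral(gauss_points):
    """Discrete J integral of Eq. (6): J = sum_g W_g * I_g.

    Each entry of gauss_points describes one integration point on the
    contour: Gaussian weight "w", stresses "sxx", "syy", "sxy",
    displacement gradients "dux_dx", "dux_dy", "duy_dx", "duy_dy",
    outward normal components "n1", "n2" and the derivatives of the
    coordinates with respect to the contour parameter "dx_dl", "dy_dl".
    (Hypothetical data layout.)
    """
    J = 0.0
    for gp in gauss_points:
        # Strain energy density, Eq. (2)
        Ws = 0.5 * (gp["sxx"] * gp["dux_dx"]
                    + gp["sxy"] * (gp["dux_dy"] + gp["duy_dx"])
                    + gp["syy"] * gp["duy_dy"])
        # dy along the contour, Eq. (3): dy/dlambda
        dy = gp["dy_dl"]
        # Traction times displacement gradient, Eq. (4)
        t_du = ((gp["sxx"] * gp["n1"] + gp["sxy"] * gp["n2"]) * gp["dux_dx"]
                + (gp["sxy"] * gp["n1"] + gp["syy"] * gp["n2"]) * gp["duy_dx"])
        # Contour arc-length differential, Eq. (5): dGamma/dlambda
        dGamma = math.hypot(gp["dx_dl"], gp["dy_dl"])
        # Weighted contribution of the integrand of Eq. (1)
        J += gp["w"] * (Ws * dy - t_du * dGamma)
    return J
```

Each Gauss point contributes its weight multiplied by the integrand of equation (1) evaluated per unit of the contour parameter, which is exactly the discrete form (6).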

3. THE RESULTS OF THE NUMERICAL ANALYSIS

The paper presents the numerical determination of the J integral for SENB specimens with an initial fatigue crack of length a0. The numerical analysis was conducted on three SENB specimens of the same dimensions, fatigue pre-cracked to different initial crack lengths. The initial fatigue crack of each SENB specimen was geometrically prepared according to the standard ASTM E 1820 [11]. The specimens were modelled on the basis of their geometric characteristics, and the finite element mesh was generated accordingly. The numerical determination of the J integral followed the experimental determination, so that the load input data, the crack lengths and the mechanical properties of the material were reproduced in the simulations. As previously noted, the finite element mesh was refined around the crack tip, which was modelled with singular finite elements.


Figure 2. 2D numerical model of the SENB specimen with the generated finite element mesh

Figure 2 shows the model of the SENB specimen with the generated mesh. As can be seen, the specimen was modelled as a 2D model, and the finite element mesh was built from quadrilateral finite elements with eight nodes. Four nodes are at the corners of the quadrilateral, while the other four are located in the middle of the sides (see Figure 2). Singular finite elements enable greater precision, and the interpolation functions can better describe the change of load and other quantities through the nodes of the finite elements. At the crack tip, the quadrilateral elements degenerate into triangular ones (Figure 2).

The mid-side nodes of the element sides connected to the crack tip are moved to the quarter-point position (1/4 of the side length) in the linear-elastic case, and remain at the mid-side position (1/2) in the elastic-plastic case. This defines the degree of singularity, which is 1/√r in the linear-elastic and 1/r in the elastic-plastic analysis. The previously conducted pre-processing implied the variation of the crack size over a specified interval, according to the experimental results for the crack length and the corresponding load.

Figure 3. Stress distribution during the experimental J integral determination

As can be seen in Figure 3, the numerical analysis of stress and strain was conducted prior to the numerical determination of the J integral, followed by the numerical determination of the J integral at the integration points, not at the finite element nodes.

Figure 4. Numerical and experimental values of the J integral for the SENB-1 specimen [5]

Figure 4 presents comparative results of the numerical analysis and the experimental results for the specimen of the same geometrical values.

Figure 5. Numerical and experimental values of the J integral for the SENB-2 specimen [5]

Figure 6. Numerical and experimental values of the J integral for the SENB-3 specimen [5]
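The quarter-point shift of the mid-side nodes described above is a purely geometric operation; the small illustrative helper below (a hypothetical function and node layout, not taken from the paper's pre-processor) shows the computation for one element edge attached to the crack tip:

```python
import numpy as np

def quarter_point_midside(tip_node, corner_node):
    """Move a mid-side node of an element edge that starts at the crack
    tip from the mid-point (1/2 of the edge) to the quarter-point
    (1/4 of the edge length from the tip), which reproduces the
    1/sqrt(r) strain singularity of the linear-elastic case.
    (Illustrative helper, not the authors' pre-processor.)"""
    tip = np.asarray(tip_node, dtype=float)
    corner = np.asarray(corner_node, dtype=float)
    return tip + 0.25 * (corner - tip)

# Example: for an edge from the crack tip (0, 0) to a corner node at
# (4, 0), the mid-side node moves from (2, 0) to (1, 0).
print(quarter_point_midside((0.0, 0.0), (4.0, 0.0)))
```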


The results of the numerical analysis in Figures 5 and 6 are compared with the corresponding experimental values of the J integral; the comparison shows that the numerical values of the J integral are lower than the experimental ones. It may also be observed that, for all three specimens, the results are closer to the experimental ones for lower values of the crack length to specimen width ratio (a/W), i.e. for lower values of the crack length a.

References
[1] Bathe, K.J.: Finite Element Procedures, Prentice Hall, New Jersey, 1996.
[2] Anderson, T.L.: Fracture Mechanics: Fundamentals and Applications, Third Edition, Taylor & Francis, New York, 2005.
[3] Mohammadi, S.: Extended Finite Element Method, Blackwell Publishing Ltd, Tehran, 2008.
[4] Reddy, J.N.: An Introduction to the Finite Element Method, Third Edition, Higher Education, New York, 2006.
[5] Hrnjica, B.: Numeričko-evolucijski pristup određivanja parametara mehanike loma posuda pod pritiskom (Numerical-evolutionary approach to the determination of fracture mechanics parameters of pressure vessels), Doctoral dissertation, University of Bihać, Faculty of Technical Engineering, Bihać, 2014.
[6] Rao, S.S.: The Finite Element Method in Engineering, Fifth Edition, Elsevier, New York, 2011.
[7] Kuna, M.: Finite Elements in Fracture Mechanics, Springer, New York, 2013.
[8] Long, Y., Cen, S., Long, Z.: Advanced Finite Element Method in Structural Engineering, Springer, New York, 2009.
[9] Madenci, E., Guven, I.: The Finite Element Method and Applications in Engineering Using ANSYS, Springer, New York, 2006.
[10] Moaveni, S.: Finite Element Analysis: Theory and Application with ANSYS, Prentice-Hall Inc., New Jersey, 1999.
[11] ASTM E 1820: Standard Test Method for Measurement of Fracture Toughness, American Society for Testing and Materials, Philadelphia, 1990.

5. CONCLUSION

This paper presents a numerical analysis of the J integral on SENB specimens with different values of the initial fatigue crack. The results of the numerical analysis show that the numerical values of the J integral are closer to the experimental values when the crack length is smaller. The experimental approach to determining the J integral remains the basic, most accurate and most reliable method, but owing to the complexity and cost of conducting the experiments it can in certain cases be a very demanding process. Determination of the J integral by the finite element method is a solid alternative, and it can be used in combination with the experimental method. The importance of the numerical determination of fracture mechanics parameters is indicated by the fact that most of today's software for numerical analysis, such as Ansys, Abaqus, etc., has modules for the numerical determination of fracture mechanics parameters: the J integral, the stress intensity factor, the strain energy, as well as other fracture mechanics quantities.


INFLUENCE OF DIFFERENT TYPES OF POLYMER IMPREGNATION ON SPECTRAL REFLECTION OF TEXTILE MATERIALS
ALEKSANDRA SAMOLOV
Military Technical Institute, Ratka Resanovia 1, Belgrade, aleksandrasamolov@yahoo.com
MILAN KULI
Military Technical Institute, Ratka Resanovia 1, Belgrade, milan_kulic@hotmail.com

Abstract: Camouflage characteristics of textile materials are of extreme importance, especially when the production of personal equipment is in question. This paper presents the results of spectral reflection measurements, in the wavelength range from 400 nm to 1000 nm, of polyamide cloth samples dyed with specific camouflage dyes and impregnated with polyurethane or chlorosulfonated polyethylene. Comparative results are given for the light green, beige green, dark green, brown and black tones. In addition, the spectral reflection of non-impregnated polyamide cloth samples was measured as well. The results show that the best camouflage characteristics are those of the polyurethane-impregnated cloth samples.
Keywords: spectral reflection, polyamide, polyurethane, chlorosulfonated polyethylene.

1. INTRODUCTION

The authors have no knowledge of previous similar research, especially research with military applications.

Camouflage characteristics of different types of materials


are very significant in view of protecting military
equipment. Pontoon bridges, motor vehicles, tanks,
different types of personal equipment are just some
examples of resources that are subject to mandatory
protection with camouflage colours.

2. EXPERIMENTAL PART
The experimental part was performed in the Laboratory
for the Examination of Textile, Leather and Footwear
within the Military Technical Institute of Belgrade,
Department of Materials and Protection. The
measurements were conducted using the UV/VIS/NIR
spectrophotometer UV 3600 from a Japanese
manufacturer Shimadzu with an integrating sphere [4].
The samples were measured in the visible and near-infrared area of the electromagnetic spectrum (400-1000 nm).

With regard to personal equipment, the most dominant issue is the appropriate camouflage protection of soldiers' uniforms. Therefore, dyeing the materials used for these purposes is a topic of rising importance. The technology and the methods of textile dyeing have long been familiar. The most attention has always been given to the permanence of colour, i.e. the ways of obtaining a constant colour in textile dyeing [1, 2], whereas recently the interest has spread to the development of so-called colour-changing fabrics [3].

The material from which the samples used in this


experiment were made is a polyamide fabric (type 6.6)
with a digital camouflage pattern of different
impregnation. One type of samples was impregnated with
standard polyurethane and the other using a
chlorosulfonated polyethylene by the commercial name of
Hypalon [5]. The values of diffuse reflection were
measured for the black, brown, dark green, light green
and beige green shades. In addition to these samples, the
values of diffuse reflection were measured in fabric
samples dyed in camouflage colours with no
impregnation containing identical ingredients (type 6.6
polyamide).

Apart from the colour, camouflage characteristics of the


material can also be influenced by material impregnation
which is most commonly used to improve physical and
mechanical properties of textile. A wide range of
polymers are used for these purposes. For the sake of this
paper, we examined samples impregnated with
polyurethane (PU) and chlorosulfonated polyethylene
(CSM). The purpose of this paper was to investigate the
influence of textile impregnation as a process on the value
of diffuse reflection within visible and near-infrared
region of the electromagnetic spectrum. Diffuse reflection
is the reflection of light from a surface such that an
incident ray is reflected at many angles rather than at just
one angle as in the case of specular reflection. An
illuminated ideal diffuse reflecting surface will have equal
luminance from all directions which lie in the half-space adjacent to the surface (Lambertian reflectance).

As we wanted to minimize the effects of the dyeing


process and the dye itself to be able to examine the
influence of impregnation, we used samples treated with
the same type of dyes applied in the same way.


3. RESULTS

Figures 1-5 show the dependence of diffuse reflection on wavelength in the range 400-1000 nm for the light green, dark green, beige green, brown and black tones, respectively, for the PU-impregnated, CSM-impregnated and untreated fabrics.

Figure 1. Dependency graph for diffuse reflection in the range between 400-1000 nm for the light green tone (fabrics treated with PU, CSM and untreated fabrics)

Figure 2. Dependency graph for diffuse reflection in the range between 400-1000 nm for the dark green tone (fabrics treated with PU, CSM and untreated fabrics)

Figure 3. Dependency graph for diffuse reflection in the range between 400-1000 nm for the beige green tone (fabrics treated with PU, CSM and untreated fabrics)

Figure 4. Dependency graph for diffuse reflection in the range between 400-1000 nm for the brown tone (fabrics treated with PU, CSM and untreated fabrics)

Figure 5. Dependency graph for diffuse reflection in the range between 400-1000 nm for the black tone (fabrics treated with PU, CSM and untreated fabrics)
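Comparisons such as those drawn in the Discussion below can be made quantitative by averaging each reflectance curve over the measured band. The short sketch below uses entirely synthetic placeholder curves (the real data would come from the spectrophotometer measurements described in Section 2) simply to show the calculation:

```python
import numpy as np

# Entirely synthetic placeholder curves R(%) over 400-1000 nm; real data
# would be exported from the UV 3600 measurements described in Section 2.
wavelength_nm = np.arange(400, 1001, 50)
reflectance = {
    "PU":        np.linspace(25.0, 40.0, wavelength_nm.size),
    "CSM":       np.linspace(18.0, 30.0, wavelength_nm.size),
    "untreated": np.linspace(17.0, 31.0, wavelength_nm.size),
}

# One band-averaged value per treatment makes the comparison of the
# curves easy to tabulate for each colour tone.
for treatment, curve in reflectance.items():
    print(f"{treatment:10s} mean R over 400-1000 nm: {curve.mean():5.1f} %")
```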

4. DISCUSSION

As can be seen in the graphs (Figures 1, 2 and 3), the curves showing the dependence of reflection on wavelength in the VIS-NIR region are similar for all three shades of the green tone. The highest values were obtained for the fabric treated with polyurethane, whereas the values obtained for the untreated fabric and the fabric treated with chlorosulfonated polyethylene were almost identical.

With the brown tone (Figure 4) the situation is reversed. The polyurethane-treated fabric has the lowest reflection values, while the fabric treated with chlorosulfonated polyethylene and the untreated fabric show a similar pattern of reflection curves.

The black tone (Figure 5) shows behaviour similar to the shades of the green tone. The highest reflection values were obtained for the fabric impregnated with polyurethane, whereas the untreated fabric and the fabric treated with chlorosulfonated polyethylene have similar values. What is interesting about this shade are the multiple intersections between the curves of the CSM-treated fabric and the untreated fabric. This pattern was also noticed for the brown tone (Figure 4), but it is even more pronounced for the black tone (Figure 5).

Diffuse reflection as a quantity depends primarily on the colour, as different shades are applied differently and react differently with fabrics, as noted earlier [6]. However, the effect of impregnation should not be neglected either, as demonstrated by the results presented here.

There are reports in the literature that when CSM reacts with nitrile rubbers at raised temperatures, new chemical bonds are formed and old ones are broken [7]. This likely happens in the case of textile as well, although further analysis (FTIR, SEM, etc.) is necessary to prove this claim [7]. The same has been established when polyurethane is impregnated on cotton, so it can be inferred that the same holds for synthetic fabrics [8].

Whether a fabric impregnated with polyurethane or chlorosulfonated polyethylene is to be used for the production of personal equipment will depend on which fabric has the better physical and mechanical properties.

5. CONCLUSION

As the presented results of this initial research show, impregnation affects the value of diffuse reflection. Tests have shown that adding polyurethane increases this property, as the values of diffuse reflection were higher, except for the brown tone. Some data found in the literature show that impregnation is accompanied by a chemical interaction between polymers, i.e. the creation of new chemical bonds and the destruction of old ones, which it will be possible to confirm after further analysis.

ACKNOWLEDGEMENT

This paper was supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia (project TR34034).

References
[1] Zollinger, H.: Color chemistry: synthesis, properties, and applications of organic dyes and pigments, VHCA and WILEY-VCH, Zurich, 2003, 1-10.
[2] Kim, S.: Functional dyes, Elsevier, 2006, 1-45.
[3] De Meyer, T., Steyaert, I., Hemelsoet, K., Hoogenboom, R., Van Speybroeck, V., De Clerck, K.: Halochromic properties of sulfonphthaleine dyes in a textile environment: The influence of substituents, Dyes and Pigments, 124 (2016) 249-257.
[4] Shimadzu, UV 3600 Tutorial, 2009.
[5] http://www2.dupont.com/Phoenix_Heritage/en_US/1996_detail.html
[6] Sheethu, J., Mundlapudi, L.R.: Lanthanum-strontium copper silicates as intense blue inorganic pigments with high near-infrared reflectance, Dyes and Pigments, 98 (2013) 540-546.
[7] Marković, G., Marinović-Cincović, M., Radovanović, B., Budimski-Simendić, J.: Studies of chemical interactions between chlorosulphonated polyethylene and nitrile rubber, VI Symposium Contemporary Technologies and Economic Development, Leskovac, 21-22 October 2005.
[8] Yeqiu, L., Jinlian, H., Yong, Z., Zhuohong, Y.: Surface modification of cotton fabric by grafting of polyurethane, Carbohydrate Polymers, 61 (2005) 276-280.

QUALITY OF RECOVERED EXPLOSIVES OBTAINED FROM DELABORATED MUNITIONS
MAJA MATOVI
Prva Iskra Namenska a.d., Belgrade, maja528@yahoo.com
LJILJANA BUNDALO
Prva Iskra Namenska a.d., Belgrade, ljiljabun@gmail.com

Abstract: The quality of explosives obtained with a technology based on the delaboration and recovery of a very wide variety of explosives is presented in this paper. This (patent-protected) recovery technology has been developed in the Prva Iskra-Namenska company, based on its own solutions; it is safe and complies with modern ecological demands, without by-products that might pollute the environment. The quality has been determined using different kinds of analysis, such as high-pressure liquid chromatography, Fourier-transform infrared spectrophotometry, determination of the melting point and others.
One acknowledgement of the quality of these products is that the explosives obtained with this recovery technology satisfy the requirements of European and world standards. The greatest confirmation of quality is the fact that most of these products are delivered to the Ministry of Defence of the Republic of Serbia and to the ministries of defence of foreign countries.
Keywords: delaboration and recovery technology, quality of HMX and RDX, HPLC, FTIR, melting point

1. INTRODUCTION

Energetic materials and munitions are used across the Department of Defense (DoD) in mission-critical applications such as rockets, missiles, ammunition and pyrotechnic devices. In these applications, energetic materials and munitions must perform as designed to ensure success in both training and combat operations. There are, however, potential environmental, occupational safety and health risks associated with these materials. Mitigating these risks throughout the life cycle of these materials is costly and time-consuming.

Neither open burning nor open detonation is a viable alternative for the disposal of these energetic materials, due to the environmental unacceptability of the emissions generated when these materials are destroyed. Also, the value of these materials is lost when they are destroyed.

Current open burning/open detonation (OB/OD) destruction practices release hazardous pollutants into the environment and negate the value of these products [1,2]. This is one more reason to implement modern recycling techniques and to reuse explosives from the abundant ordnance [3,4].

The company "Prva Iskra Namenska" (PIN) has developed a system for the safe recycling or reuse of individual components and an environmentally safe waste stream for those components that cannot be reused. The technology that has been developed in the PIN company allows the delaboration and recovery of a very wide range of explosives from munitions items [5].

This technology demonstrated a new process of extraction and recovery of HMX and RDX using organic solvents. The recovery process involves solubilizing the binder with an organic solvent or hot water, and then separating the explosives from the binder solution by centrifugation. The recovered HMX (r-HMX) and RDX (r-RDX) are of high purity, are obtained at a high yield, and have melting points comparable with those of pure HMX and RDX.

The most widely used explosives, such as octogen (HMX) and hexogen (RDX), can be recovered from old ammunition and can be reused economically as base components for new compositions. Many munitions items (including projectiles, missile and torpedo warheads, and strategic rocket motors) contain significant quantities of these energetic compounds.

2. EXPERIMENTAL METHODS

This paper presents the chemical and physical properties of r-HMX and r-RDX, which meet the requirements of the relevant standards. The characterization of r-HMX, r-RDX and their mixtures was performed in the laboratories of the PIN company [6-9]. The samples used for the analysis were obtained in the PIN company after recovery from old munitions.

The samples (Table 1) are prepared simply, by grinding the sample to dust in an agate mortar.

Table 1. Sample mixtures of r-RDX and r-HMX

sample number | r-RDX, % | r-HMX, %
1             | 100.00   | 0.00
2             | 98.85    | 1.15
3             | 97.94    | 2.06
4             | 95.54    | 4.46
5             | 94.82    | 5.18
6             | 91.95    | 8.05
7             | 87.61    | 12.39
8             | 79.54    | 20.46
9             | 74.66    | 25.34
10            | 60.10    | 39.90
11            | 48.06    | 51.94
12            | 0.00     | 100.00

2.1. Identification of explosives

The purity and identification of the explosives (r-HMX and r-RDX) were determined using a Fourier-transform infrared (FTIR) spectrophotometer, in the wavenumber interval from 4000 cm-1 to 600 cm-1, by attenuated total reflection (ATR). The recorded area is 600 to 2200 cm-1.

The presence of RDX in HMX is determined by observing the characteristic lines in the area from 770 to 1080 cm-1. The presence of HMX in RDX is confirmed by the identification of characteristic lines in the FTIR spectra in the area from 1010 to 1200 cm-1 [8].

2.2. Purity of explosives

The purity of r-HMX and r-RDX was determined by high-performance liquid chromatography, on an HPLC Ultimate 3000 instrument manufactured by Thermo, USA.

2.3. Melting point of explosives

The melting point of each sample was evaluated in a capillary melting point apparatus, OptiMelt, under the conditions listed below. The determination of the melting point was performed on samples dried at a temperature of 80 °C for 1 hour. After that, and before putting the samples into the instrument, they were cooled in a desiccator to room temperature for approximately 30 minutes.

2.4. The acidity of explosives

The acidity of the pure explosives is determined by potentiometric titration. The device for determining the acidity is a Mettler Toledo G20 Compact Titrator. The method is based on dissolving the explosive in a suitable solvent, precipitation with distilled water and then direct titration.

2.5. Insoluble matter in acetone

This method is based on dissolving the sample in a filter crucible and measuring the insoluble residue. This parameter indicates the presence of impurities in the sample, as well as the possible presence of sandy materials.

2.6. Impact test

The impact test gives a reproducible value of the sensitivity to shock of a given explosive. It is carried out according to an international standard [9]. The apparatus for the impact test consists of a vertical sliding plane along which a weight (usually 2 kg) is dropped from a given height. The falling weight strikes a steel anvil supporting a piston assembly which contains a sample of the explosive. The lowest falling height at which the explosive detonates is the falling-weight value.

3. RESULTS AND DISCUSSION

3.1. Identification of explosives

The procedures of preparation and identification of the explosives are defined. For the recording of the samples (Table 1), attenuated total reflection (ATR) is used. ATR is a sampling technique used in conjunction with infrared spectroscopy which enables samples to be examined directly in the solid or liquid state without further preparation.

Picture 1. FTIR spectrum of r-RDX, sample 1

Picture 2. FTIR spectrum of the mixture r-HMX/r-RDX, samples 2-6

Picture 3. FTIR spectrum of the mixture r-HMX/r-RDX, samples 7-11


Picture 4. FTIR spectrum of r- HMX, sample 12

Picture 6. HPLC Chromatogram of r- RDX, sample 1

The presence of r-HMX in r-RDX can be determined on the basis of the spectral lines that occur in the area of 1010 to 1200 cm-1, as can be seen in the spectra shown in Pictures 1-4. The most important thing is, of course, the α or β modification of the HMX in the samples. The HMX must not be in the α modification, because as such it is unstable.
Table 2. Characteristic peaks for the α and β modifications of HMX

Modification of HMX | Maximum peak, wavenumber (cm-1)
α modification      | 3030, 1575, 1545, 1455, 1415, 1394, 1368, 1323, 1270, 1218, 1111, 1090, 1034, 947, 919, 366, 349, 768, 754, 742, 714
β modification      | 2999, 1575, 1460, 1432, 1385, 1348, 1280, 1239, 1202, 1148, 1000, 968, 949, 919, 864, 832, 773, 762
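A simple way to use Table 2 in practice is to count how many of the tabulated reference bands are found, within a small tolerance, among the peaks read off a measured FTIR spectrum. The sketch below is purely illustrative; the tolerance, the matching criterion and the α/β labelling of the two peak lists (which follows the row order of Table 2) are editorial assumptions:

```python
# Reference band positions from Table 2, in cm^-1; the alpha/beta labels
# follow the row order of the table and are an editorial assumption.
ALPHA_BANDS = [3030, 1575, 1545, 1455, 1415, 1394, 1368, 1323, 1270, 1218,
               1111, 1090, 1034, 947, 919, 366, 349, 768, 754, 742, 714]
BETA_BANDS = [2999, 1575, 1460, 1432, 1385, 1348, 1280, 1239, 1202, 1148,
              1000, 968, 949, 919, 864, 832, 773, 762]

def match_score(measured_peaks, reference_bands, tol=6.0):
    """Fraction of reference bands that have a measured peak within
    tol cm^-1 (a simple, hypothetical matching criterion)."""
    hits = sum(any(abs(p - band) <= tol for p in measured_peaks)
               for band in reference_bands)
    return hits / len(reference_bands)

def classify_modification(measured_peaks):
    scores = {"alpha": match_score(measured_peaks, ALPHA_BANDS),
              "beta": match_score(measured_peaks, BETA_BANDS)}
    return max(scores, key=scores.get), scores

# Example with a few peaks read off a measured FTIR spectrum:
print(classify_modification([2999, 1460, 1432, 1280, 968, 949, 832, 762]))
```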

Picture 7. HPLC Chromatogram of r- HMX, sample 12

Picture 8. Chromatogram of mixture of r- RDX and rHMX (sample 2-11)


Picture 5. FTIR spectrum of and modification

Based on these results we can see that r-RDX and r-HMX


obtained by the delaboration, are in accordance with the
requirements of national and international standards [6-8,
11-13].

The difference between the α and β crystalline modifications of HMX can be seen on the basis of the characteristic lines that occur in the spectral area from 1280 to 800 cm-1 (Picture 5). In the samples of r-HMX obtained by delaboration, not one was identified as the α modification.

3.2. Purity of explosives

The procedure of preparation and characterization of the samples (Table 1) is defined as follows. Accurately weigh approximately 0.1000 g of sample and dissolve it in 25.0 ml of acetonitrile in an ultrasonic bath. Filter through a white-ribbon filter and then through a 0.22 μm Nylon syringe filter. Then dilute 0.2 ml of the solution to 10.0 ml with acetonitrile.

3.3. Melting point

The sample (Table 1) is first pulverized in an agate mortar, so that it becomes a fine powder. The glass capillaries are filled by placing the open end of the capillary into the sample and then tapping it on a hard surface, or by lightly compacting the sample with a pusher, so that the sample is pushed to the closed end of the capillary.
The optimum charge level of the capillary is 2-3 mm. A full determination requires three capillaries from the same sample. After completing the program, the melting temperature, in °C, is read from the display.

The sample mixtures of r-RDX and r-HMX and the conditions of the melting point determination are presented in Table 3. The presence of r-RDX in the recovered explosives (mixtures of r-RDX and r-HMX) can be determined using the graph in Picture 9.

3.4. The acidity of explosives

Accurately weigh approximately 10.0000 g of sample into a glass beaker of 400 ml or 800 ml. Then dissolve it in 100.0 ml of acetone (r-RDX) or 500.0 ml of acetone (r-HMX) on a water bath. When the sample has dissolved, add 100.0 ml of distilled water to precipitate the explosive. The sample must cool down to room temperature and is then titrated with a 0.1 M sodium hydroxide solution with an appropriate indicator. For RDX the indicator is a mixture of methyl red and methyl blue, and for HMX the indicator is phenolphthalein.

The acidity of the recovered explosive samples is presented in Table 5.

Table 3. Samples and conditions of determination

sample numb. | r-RDX, % | r-HMX, % | Tstart (°C) | Tstop (°C) | heating rate (°C/min)
1            | 100.00   | 0.00     |             |            |
2            | 98.85    | 1.15     | 190         | 210        | 1.0
3            | 97.94    | 2.06     | 190         | 210        | 1.0
4            | 95.54    | 4.46     | 190         | 210        | 1.0
5            | 94.82    | 5.18     | 190         | 210        | 1.0
6            | 91.95    | 8.05     | 190         | 210        | 1.0
7            | 87.61    | 12.39    | 185         | 210        | 0.8
8            | 79.54    | 20.46    | 185         | 210        | 0.8
9            | 74.66    | 25.34    | 185         | 210        | 0.8
10           | 60.10    | 39.90    | 185         | 210        | 0.8
11           | 48.06    | 51.94    | 185         | 210        | 0.8
12           | 0.00     | 100.00   | 200         | 300        | 20.0

The results of the determination are presented in Table 4 and Picture 9.

Table 4. Melting points of the mixtures of r-RDX and r-HMX

Sample number | Melting point (°C)
1             | 204.8
2             | 202.4
3             | 200.7
4             | 198.7
5             | 197.8
6             | 195.0
7             | 190.3
8             | 188.5
9             | 188.2
10            | 189.4
11            | 191.0
12            | 284.5

Table 5. Acidity of samples

Sample number | Acidity
1             | 0.0003
2             | 0.0003
3             | 0.0004
4             | 0.0005
5             | 0.0005
6             | 0.0005
7             | 0.0002
8             | 0.0004
9             | 0.0003
10            | 0.0005
11            | 0.0003
12            | 0.0005
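Since the melting point of the mixtures decreases monotonically with the r-HMX content up to about 25 % (samples 1-9 in Table 4), the composition of an unknown r-RDX/r-HMX mixture can be estimated from a measured melting point by interpolating the tabulated values, which is essentially how the curve in Picture 9 is used. The following minimal sketch is a hypothetical helper, not part of the authors' procedure:

```python
import numpy as np

# Table 4 data for samples 1-9 (0-25.34 % r-HMX), where the melting
# point decreases monotonically with increasing r-HMX content.
hmx_percent = np.array([0.00, 1.15, 2.06, 4.46, 5.18, 8.05, 12.39, 20.46, 25.34])
melting_c = np.array([204.8, 202.4, 200.7, 198.7, 197.8, 195.0, 190.3, 188.5, 188.2])

def estimate_hmx_content(measured_mp_c):
    """Estimate the r-HMX content (%) from a measured melting point (deg C)
    by linear interpolation of the Table 4 data; valid only in the
    monotonic 0-25 % r-HMX range (hypothetical helper)."""
    order = np.argsort(melting_c)          # np.interp needs increasing x
    return float(np.interp(measured_mp_c, melting_c[order], hmx_percent[order]))

print(estimate_hmx_content(197.0))  # roughly 6 % r-HMX, between samples 5 and 6
```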

As can be seen from the results displayed in Table 5, the acidity content is in accordance with the requirements of national and international standards [6-8, 11-13].

3.5. Insoluble matter in acetone

Accurately weigh approximately 10.0000 g of sample into a filter crucible and dissolve it with 100.0 ml of acetone heated to a temperature of 56 °C. After the dissolution, the filter crucible is dried in a laboratory oven at about 100 °C to constant weight. From the weight difference before and after the dissolution, the percentage of insoluble residue in the sample is calculated. These samples did not contain undissolved substances, which is in accordance with the requirements of national and international standards [6-8, 10-13].
3.6. Impact test

The sensitiveness to impact of the recovered RDX and HMX is presented in Table 6.

Table 6. Results of the impact test for r-RDX and r-HMX

Sample | Height | Amount of sample, g | Energy, J | Strike
r-HMX  | 25 cm  | 0.033               | 5         | No
r-RDX  | 25 cm  | 0.033               | 5         | No
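The impact energy listed in Table 6 follows directly from the drop weight and height of the fallhammer test described in Section 2.6 (E = m g h); assuming the 2 kg weight mentioned there, a quick check gives:

```python
# Impact energy of the fallhammer test: E = m * g * h
m = 2.0   # drop weight in kg (the "usually 2 kg" weight from Section 2.6, assumed here)
g = 9.81  # gravitational acceleration, m/s^2
h = 0.25  # drop height in m (25 cm, Table 6)
print(round(m * g * h, 2), "J")  # about 4.9 J, i.e. the ~5 J listed in Table 6
```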
Based on the experimental results shown in the Table 6, it
can be seen that the values obtained are in accordance with
the sensitiveness to impact of pure RDX and HMX.

Picture 9. Melting point curve of the mixture of r-RDX and r-HMX

4. CONCLUSION

Based on these results, we can see that the explosives obtained in this way, by delaboration and recovery, meet the requirements of the standards of the Republic of Serbia and of foreign standards such as MIL and GOST [10-13]. This enables the reuse of r-RDX and r-HMX which would otherwise be destroyed by open burning and open detonation, and the emissions caused by OB/OD will be minimized. Also, the solvent waste streams generated by the process are recovered. Most importantly, this recovery process will provide a source of HMX and RDX, so that new explosives will not have to be manufactured.

References
[1] DIRECTIVE NUMBER 4715.11, Environmental and Explosives Safety Management on Operational Ranges Within the United States, Department of Defense, 2007.
[2] DIRECTIVE 1999/92/EC of the European Parliament and of the Council on minimum requirements for improving the safety and health protection of workers potentially at risk from explosive atmospheres (15th individual Directive within the meaning of Article 16(1) of Directive 89/391/EEC), Official Journal of the European Communities L 23/57, 1999.
[3] Branco, C., Schubert, H., Campos, J.: Defense Industries: Science and Technology Related to Security - Impact of Conventional Munitions on Environment and Population, Springer, Heidelberg, 2007.
[4] Poehlein, S., Wilharm, C., Sims, K., Burch, D.: Recovery and Reuse of HMX/RDX from Propellants and Explosives, 2002.
[5] www.prvaiskra-namenska.com/delaboration
[6] SORS 1130/97, Heksogen, Ciklotrimetilentrinitramin
[7] SORS 7572/97, Oktogen, Ciklotetraamintetranitramin
[8] SORS 8457/96, Metode kontrolisanja brizantnih eksploziva (Methods of testing high explosives)
[9] Explosives for Civil Uses - High Explosives - Part 4: Determination of Sensitiveness to Impact of Explosives, BS EN 13631-4:2002, British Standards Institution, London, United Kingdom, 2002.
[10] GOST 20395-74
[11] GOST 84-1344-76
[12] MIL-DTL-398
[13] MIL-DTL-45444B

THE STRENGTH INVESTIGATION OF SPECIFIC POLYMERIC COMPOSITE ELEMENT/METALLIC ELEMENT JOINT REALIZED BY PINS
SLOBODAN ITAKOVI
Military Technical Institute, Belgrade, nadac@list.ru
JOVAN RADULOVI
Military Technical Institute, Belgrade, jovan.r.radulovic@gmail.com

Abstract: In this paper, the results of tensile testing of a specific assembly of two elements joined by pins are presented. One element is a specific composite tube, produced by filament winding technology using glass fibre impregnated with polyester resin, and the other element is a cylindrical steel part, obtained by standard machining technology. The assembly of the specific composite part and the metallic part, joined by pins, was exposed to the action of axial tensile stress. The tensile breaking force of the assembly was determined. The influence of the number of pins on the tensile breaking force of the joint is presented. The failure mechanism of the specific composite tubes is described.
Keywords: Polymeric composites, filament winding technology, tubes, joining, pins, tensile characteristics.

1. INTRODUCTION
For a production of an assembly elements different
materials can be used (polymeric materials, metal materials,
wood, ceramic, glass, etc.).

2. COMPOSITE MATERIALS

Plastic materials, in short, can be divided into thermosetting materials and thermoplastic materials.

Composite materials are, usually, formed of two or more


materials and are characterized by new properties with
regard to starting components.

Thermosetting materials usually are not used in neat form


for engineering and stress-demanding purposes. In most
cases, these materials are one component of composite
materials.

One of composite materials production technology


increasingly used is filament winding technology.
Characteristics of filamentwound polymeric composite
materials can be rarely found in expert literature for the
numerous reasons, but two the most important are extremely
versatility of starting materials (reinforcing agents and
impregnating agents) and technological parameters.

The term composite originally arose in engineering when


two or more materials were combined in order to rectify
shortcomings of particularly useful components [1].
Until early 1990s, the use of fiber-reinforced polymer
composites was almost limited to only aerospace and
military applications. By the mid-1990s, civil engineers
started to realize the advantages of such materials and to use
them. The introduction of composite materials in the
automotive industry, places new demands on the materials
and manufacturing processes in terms of cost, cycle time
and automation. Manufacture and assembly of composite
structures require knowledge of reliable joining techniques.

Based on theoretical considerations and practical experience


on investigation and development of polymeric composite
materials obtained by filament winding technology, it is
known that characteristics of mentioned materials depend of
reinforcing agent and impregnating agent and, also, of
technological parameters of production process [2].
Reinforcing agents can be polymer-based (carbon fibres,
graphite fibres, aromatic polyamide fibres (aramide),
novoloid fibres, etc) and no polymeric (glass fibres) ones.

High performance composite materials are nowadays widely


employed in many advanced engineering fields due to
overall set of characteristics. Primarily mechanical
properties of these materials make them attractive for
structural applications where high strength-to-weight and stiffness-to-weight ratios are required. High-performance composites satisfy requirements for high specific strength and specific stiffness, low coefficient of thermal expansion, good fatigue resistance, high damping properties, dimensional stability during the operational lifetime and, also, offer the minimum-weight material solution for these structures.

Impregnating agents are mostly polymeric (polyester resins,


epoxy resins, phenolic resins and other thermosetting
resins). It is considered that there are more than 5.000 of
composite polymeric materials [3].
Filamentwound composite materials have unique set of
specific properties.


Considering the extreme versatility of the used materials and technological parameters (especially the orientation, number and sequence of wound layers), almost every filament-wound product has unique properties, especially those built into complex assemblies [4].

In this paper, the results of tensile testing of a joint between a specific polymeric composite element and a metallic element, realized by pins, are presented. The specific polymeric composite element is a tube, produced by filament winding technology from glass fibre impregnated with polyester resin, and the metallic element is a steel cylinder, produced by standard machining.

3. MECHANICAL FASTENING

The assembly elements can be joined in different ways, in order for the produced assembly to fulfil its function. In composite structures, three types of joints are commonly used, namely: mechanically fastened joints, adhesively bonded joints, and hybrid mechanically/adhesively bonded joints.

Mechanical joining falls into two distinct groups: fasteners and integral joints. Examples of fasteners include nuts, bolts, screws, pins and rivets; examples of integral joints include seams, crimps, snap-fits and shrink-fits.

Mechanical fastening is a very common procedure for the production of high- or low-volume thermosetting material assemblies. Mechanical fasteners, commonly used in many advanced engineering applications dealing with composite materials, play the main role of transferring loads between the linked structural elements [5].

The usage of composites is increasing in aerospace and other engineering industries, and the study of joining methods for composite materials has become an important research area. The main objective of the mechanically fastened joint is to transfer the applied load from one part of the joint structure to the other through the fastener element [6]. As the applications of advanced composite structural materials continue to increase, so does the need to understand the mechanical behaviour of mechanically fastened joints in such structures.

Virtually every large-scale composite structure contains joints. The use of joints is due to manufacturing constraints and to requirements related to accessibility to the structure, quality control, structural integrity assessment and part replacement.

Mechanically fastened joints offer a number of characteristics that make them well suited for joining composite laminates. For example, mechanically fastened joints are relatively inexpensive to manufacture and can be disassembled [7].

Generally speaking, concerning the joining of composite elements, there are two possibilities: a joint between a composite part and a metallic part, and a joint between a composite part and another composite part [8].

Mechanically fastened joints are exposed to different kinds of external forces. In practice, an external force can cause either a non-catastrophic or a catastrophic failure mode of a mechanically fastened joint [9].

The non-catastrophic failure mode of a mechanically fastened joint is the so-called bearing failure mode, presented in Figure 1.

Figure 1. Bearing failure mode

The catastrophic failure mode of a mechanically fastened joint can be associated with the fastener (so-called bolt shear, presented in Figure 2) or with the composite material.

Figure 2. Bolt shear failure mode

The catastrophic failure modes of a mechanically fastened joint associated with the composite material can be tension, shear-out and cleavage, presented in Figures 3, 4 and 5, respectively.

Figure 3. Tension failure mode

Figure 4. Shear-out failure mode

Figure 5. Cleavage failure mode

4. EXPERIMENTAL PART

4.1. Production of tube samples

The tubes used in the experiments were produced by filament winding of a glass roving, trade mark R 2117 (made by the glass fibre manufacturer "ETEX", Baljevac/Ibar), impregnated with a polyester resin system, trade mark DUGAPOL H 230 (made by the polyester resin manufacturer "DUGA", Belgrade), with the addition of an inhibitor, trade mark TBC (produced by the chemical producer AKZO, Holland). The tubes were wound on a cylindrical mandrel using the PLASTEX type PLA 500

machine (made by the machine manufacturer PLASTEX-MANUHRINE, France). The winding angle is the angle which the reinforcing agent (fibre) forms with respect to the longitudinal axis of the product.

By a machining process, tube samples 250 mm long were cut from the cured tubes. Appropriate 5 mm diameter holes, whose centres are 25 mm from an end of the tube samples, were produced.

The specimen marks, the winding structure (from the inside toward the outside), hole number, internal diameter, external diameter and wall thickness of the tube samples are presented in Table 1.

Table 1. Specimen marks, winding structure, hole number, internal diameter, external diameter and wall thickness of tube samples

Specimen mark | Winding structure            | Hole number | Internal diameter (mm) | External diameter (mm) | Wall thickness (mm)
4P            | 1 x 90° + 2 x 61° + 1 x 90°  | 4           | 64,20                  | 67,60                  | 1,70
6P            | 1 x 90° + 2 x 61° + 1 x 90°  | 6           | 64,20                  | 67,60                  | 1,70
8P            | 1 x 90° + 2 x 61° + 1 x 90°  | 8           | 64,20                  | 67,60                  | 1,70
12P           | 1 x 90° + 2 x 61° + 1 x 90°  | 12          | 64,20                  | 67,60                  | 1,70

For the purpose of this paper, the following abbreviations will be used:
- specimen 4P is the assembly consisting of the tube with four holes fastened to the testing tool by 4 pins,
- specimen 6P is the assembly consisting of the tube with six holes fastened to the testing tool by 6 pins,
- specimen 8P is the assembly consisting of the tube with eight holes fastened to the testing tool by 8 pins,
- specimen 12P is the assembly consisting of the tube with twelve holes fastened to the testing tool by 12 pins.

4.2. Results of investigation and analysis

The assembly, consisting of the composite tube with holes fastened to the metallic part by pins, was exposed to the action of tensile stress in the axial direction. Testing was ended when failure of the tube sample happened. The specimen marks, hole number, single values and arithmetic mean values and standard deviations of the obtained tensile breaking force are presented in Table 2.

Specimens 4P, 6P, 8P and 12P, after the investigation of the tensile breaking force, are presented in Figures 6, 7, 8 and 9, respectively.

Table 2. Specimen marks, hole number, single values, arithmetic mean values and standard deviations of tensile breaking force

                                    Tensile breaking force (kN)
Specimen mark | Hole number | Single values       | Arithmetic mean value and standard deviation
4P            | 4           | 8,75; 9,45; 8,95    | 9,03 ± 0,33
6P            | 6           | 15,05; 14,00; 14,10 | 14,38 ± 0,58
8P            | 8           | 18,50; 19,15; 18,30 | 18,65 ± 0,44
12P           | 12          | 27,20; 25,60; 26,00 | 26,27 ± 0,83
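The arithmetic mean values and standard deviations in Table 2 can be recomputed directly from the single values; the short sketch below (decimal commas written as points) shows the calculation with the sample standard deviation:

```python
import statistics

# Single tensile breaking force values from Table 2 (kN)
breaking_force_kN = {
    "4P": [8.75, 9.45, 8.95],
    "6P": [15.05, 14.00, 14.10],
    "8P": [18.50, 19.15, 18.30],
    "12P": [27.20, 25.60, 26.00],
}

for mark, values in breaking_force_kN.items():
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    print(f"{mark:>3s}: {mean:5.2f} +/- {sd:4.2f} kN")
```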

Figure 6. Specimen 4P after testing

Figure 7. Specimen 6P after testing

As can be seen in Figure 6, after the action of the tensile force, the failure of the tube with four holes happened only in the axial direction, and no damage between the pins of specimen 4P is observed.

A somewhat different situation can be seen in Figure 7. After the completed action of the tensile force in the axial direction, failure of the tube with six holes is observed in both the axial and the radial direction. The failure of specimen 6P in the radial direction between the six pins is small, but it exists.

Figure 8. Specimen 8P after testing

Figure 9. Specimen 12P after testing

The failure of specimen 8P is similar to the failure of specimen 6P, because damage in both the axial and the radial direction is observed. As can be seen in Figure 8, the damage of specimen 8P with eight holes between the pins, i.e. the damage in the radial direction, is greater than in specimen 6P.

In Figure 9 a completely different situation is presented. After the action of the tensile force only in the axial direction on specimen 12P with twelve holes, the damage between the pins caused catastrophic failure of the tube. Specimen 12P is catastrophically damaged at the position between the pins, and the part of the tube above the pins, i.e. the ring, is separated from the remaining part of the tube.

The data presented in Table 2 point out that specimen 4P, i.e. the assembly consisting of the tube with four holes fastened to the metallic part by 4 pins, has the lowest tensile breaking force (9,03 ± 0,33 kN). The tensile breaking force of specimen 6P, i.e. the assembly consisting of the tube with six holes fastened to the metallic part by 6 pins, is (14,38 ± 0,58) kN. The arithmetic mean value and standard deviation of the tensile breaking force of specimen 8P, i.e. the assembly consisting of the tube with eight holes fastened to the metallic part by 8 pins, is (18,65 ± 0,44) kN. Specimen 12P, i.e. the assembly consisting of the tube with twelve holes fastened to the metallic part by 12 pins, has the highest tensile breaking force (26,27 ± 0,83 kN).

5. CONCLUSIONS

Based on the presented data, it can be concluded:
1. The tensile breaking forces of assemblies consisting of filament-wound composite tubes and machined steel parts, fastened by different numbers of pins, were investigated.
2. The number of pins has a positive influence on the tensile breaking force of the specific polymeric composite part/metallic steel part assembly fastened by the mentioned joining element. The specimen with four pins has the lowest tensile breaking force, i.e. (9,03 ± 0,33) kN, and the specimen with twelve pins has the highest tensile breaking force, i.e. (26,27 ± 0,83) kN.
3. A pure cleavage failure mode was observed after testing specimen 4P, i.e. the assembly consisting of the tube with four holes fastened to the machined steel part by 4 pins.
4. Specimen 6P, i.e. the assembly consisting of the tube with six holes fastened to the machined steel parts by 6 pins, showed a mixed cleavage-tension failure mode.
5. A mixed tension-cleavage failure mode was observed after testing specimen 8P, i.e. the assembly consisting of the tube with eight holes fastened to the machined steel part by 8 pins.
6. A pure tension failure mode was observed after testing specimen 12P, i.e. the assembly consisting of the tube with twelve holes fastened to the machined steel part by 12 pins.

References
[1] Radulović, J.: Filament Wound Composite Plastic Tubes: Relationship Between Winding Structures and their Hydraulic and Mechanical Properties, Scientific Technical Review, 2011, Vol. 61, No. 3-4, pp. 73-77.
[2] Radulović, J.: Characterization of Filamentwound Polymeric Composite Materials, Scientific Technical Review, Vol. LVIII, No. 1, pp. 66-75, Belgrade, 2008.
[3] Ashby, M.F.: Materials Selection in Mechanical Design, Butterworth-Heinemann, Oxford, 1999.
[4] Radulović, J.: Influence of Internal Cyclic Pressure on Filamentwound Composite Tube Quality, Scientific Technical Review, Vol. LX, No. 1, pp. 54-60, Belgrade, 2010.
[5] Pisano, A.A. and Fuschi, P.: Mechanically fastened joints in composite laminates: Evaluation of load bearing capacity, Composites: Part B, 42, 2011, pp. 949-961.
[6] Wankhade, A.P. and Jadhao, K.K.: Design and Analysis of Bolted Joint in Composite Laminated, International Journal of Modern Engineering Research, Vol. 4, Iss. 3, Mar. 2014, pp. 20-24, ISSN: 2249-6645.
[7] Camanho, P.P. and Lambert, M.: A design methodology for mechanically fastened joints in laminated composite materials, Composites Science and Technology, Vol. 66, 2006, pp. 3004-3020.
[8] Gay, D.: Composite Materials: Design and Applications, 3rd Ed., CRC Press, Taylor & Francis Group, Boca Raton, 2015, ISBN 978-1-4665-8487-7.
[9] http://www.engidesk.com/publishingarea/Mechanically-fastended-joints.aspx

QUALITATIVE AND QUANTITATIVE ASSESSMENT OF BOND STRENGTH OF SOLID ROCKET PROPELLANT AND THERMOPLASTIC MATERIAL FOR CARTRIDGE LOADED GRAIN
JOVAN RADULOVI
Military Technical Institute (VTI), Belgrade, jovan.r.radulovic@gmail.com

Abstract: The bond strength between thermoplastic extruded poly(vinyl chloride) tube and hydroxyl-terminated
poly(butadiene)-based composite rocket propellant was studied in this paper. Assessment of bond strength was carried out
quantitatively, based on strength determination using specific method and qualitatively, based on cohesion energy of tested
materials. Inhibited cartridge loaded propellant grain, after conditioning at ambient temperature, gave a successful
performance during static evaluation.
Keywords: Poly(vinyl chloride), hydroxyl-terminated poly(butadiene), bond strength, cartridge loaded grain.

1. INTRODUCTION
The choice of material cannot be made independently of the choice of technology by which the material is obtained, the part produced and bonded with other elements of the finished assembly.
The second half of the previous century and the years of the current century up to today are considered as a period of polymeric materials. Basically, polymeric materials consist of plastic materials and elastomeric materials. Plastic materials, generally speaking, can be divided into thermoplastic materials and thermosetting materials.
Thermoplastic polymers are made of long chain molecules of varying sizes and distributions. These polymers tend to be relatively viscous and sticky above their melt temperature.
In this paper, an investigation of the bond strength between one thermoplastic material and one thermosetting system is described. The thermoplastic material is extruded poly(vinyl chloride) and the thermosetting system is cast hydroxyl-terminated poly(butadiene) prepolymer cured with isophorone diisocyanate.

2. MATERIALS AND TECHNOLOGIES
Poly(vinyl chloride), according to IUPAC nomenclature poly(1-chloroethylene), abbreviated PVC, is produced in an addition polymerization reaction from vinyl chloride monomer by a free radical mechanism.
The product of the polymerization process is unmodified PVC. Basically, since PVC has a high polarity and high compatibility with a variety of additives, it is possible to mix these ingredients easily into the polymer and to obtain a compound that can be processed by an appropriate technology. By this technique some of the shortcomings of rigid PVC products can be modified [1].
Concerning thermoplastic materials, PVC-based compounds can be considered as the plastics most versatile with respect to processing technologies [2]. PVC-based compounds can be processed by extrusion, calendering, injection molding, impregnation, blow molding, casting, etc.
PVC compounds prepared for pipe extrusion are a mixture of PVC resin and a combination of stabilizers, fillers, lubricants, pigments and modifiers [3]. As a heat and light sensitive material, PVC degrades by dehydrochlorination and oxidation processes. This can be seen in the changing color of the PVC. In chemical terms, the formation of conjugated double bonds causes the color change. PVC compounds experience heat history in the ingredients, mixing cycles and the extrusion process. Oxidation products occur on exposure to weathering.
One of the most crucial additives are heat stabilizers. The function of the stabilizer is to delay heat degradation so that the compound can be formed into a product before it degrades. The stabilizer functions by absorption of hydrogen chloride, displacement of active chlorine atoms, free radical scavenging, disruption of double bond formation, deactivation of degradation by-products, peroxide decomposition and ultraviolet energy absorption. Widely used heat stabilizers are organotin mercapto esters. They give good heat and light resistance and color stability, and they promote fusion and reduce melt viscosity [4].
The most common fillers are metal carbonates, mainly calcium carbonate coated with stearic acid. The coating reduces the abrasiveness of the calcium carbonate and the wear of the extruder barrel and screw.
Lubricants serve to decrease the frictional forces between the polymer, the metal surfaces of the process equipment and the filler. The external lubricant is added so that the PVC can be processed during the extrusion cycle. Without this lubricant the PVC would stick to the metal surfaces of the barrel and screws of the extruder. A lubricant combination consisting of a paraffin wax, calcium stearate and oxidized PE wax is well suited for PVC pipe extrusion [5].

There are three main reasons to use pigments in PVC compounding: to achieve opacity in non-weatherable compounds, for UV protection in weatherable compounds and to achieve a given color. Titanium dioxide is the major pigment used.
In colored PVC compounds a mixture of rutile type titanium dioxide and other appropriate pigments is used for obtaining the demanded color.
Impact modifiers, such as acrylic polymers, are used in weatherable PVC products. They allow good retention of properties together with good color retention of extruded PVC elements. During the extrusion process it is important to use the proper melt temperature, which will cause fusing of the impact modifier. By this procedure the full properties of the impact modifier will be realized.
Extrusion is a compression process in which material is forced to flow through a die orifice to provide a long continuous product whose cross-sectional shape is determined by the shape of the orifice [6].
The compound for PVC pipe extrusion consists of PVC K67, pigments, calcium carbonate, organotin mercapto esters, paraffin wax, calcium stearate, oxidized PE wax and acrylics.
In short, during polymer extrusion the feedstock, i.e. the compound (in pellet or powder form), is fed into an extrusion barrel where it is heated, melted and forced to flow to the front of the barrel by means of a rotating screw. At the front of the barrel, the molten plastic leaves the screw and travels through wire meshes and a breaker plate. After passing through the breaker plate, the molten plastic enters the die, which gives the final product profile. The cooling of the product is usually achieved by pulling the extrudate through a water bath. In a sealed water bath the PVC pipe is cooled by water in liquid or chilled state [7,8,9].
The grain can be manufactured, mostly, by two technologies: extrusion and casting [10]. Extrusion was mentioned earlier in this paper and can be used for grain production of smaller, limited size.
In polymer shaping, casting involves pouring a liquid resin into a mould, using gravity to fill the cavity, and allowing the polymer to harden. Both thermoplastic materials (acrylates, polyamides, polystyrenes, etc.) and thermosetting materials (polyurethanes, unsaturated polyesters, phenolic and epoxy resins) can be produced by the casting technique.
The ingredients of the composite propellant are admixed. Generally, mixing involves mechanically blending the components at elevated temperature. Briefly, in casting technology the ingredients are mechanically mixed, cast and cured. The casting process involves pouring the liquid homogenized thermosetting system into a mould so that cross-linking occurs.
In a broad sense, hydroxyl-terminated poly(butadiene)-based composite rocket propellant can be considered as a weakly crosslinked thermosetting polyurethane system. Composite propellants consist primarily of an organic binder, inorganic oxidizer and metal fuel. The propellant may also include a burn-rate modifier, plasticizer, antioxidant and bonding agent [11].
The binder (polymer matrix), as the name implies, holds the composition together and acts as an auxiliary fuel. Once cured, the binder makes the propellant flexible, which decreases the likelihood that the propellant will fracture under stress and pressure. The binder comprises at least two components. The first one is a liquid prepolymer and the second one is a curing agent. Polyurethane-based binder systems (hydroxyl-functional prepolymers, such as hydroxyl-terminated polybutadiene, HTPB, cured using multifunctional isocyanates) are extensively used in composite solid propellants, due to convenient reaction conditions and the relative lack of adverse side reactions [12].
A major component of the propellant, by weight and volume, is the oxidizer, which produces the high energy on combustion. One of the most commonly used oxidizers is ammonium perchlorate (AP) due to its relatively high availability, relatively low cost, high energy, ability to oxidize commonly available fuels and variable burn rate. A multimodal combination of AP particles provides the optimum balance between exposed oxidizer surface area and packing fraction, both of which impact the burn rate [13].
Metal particles are added as a fuel. Preferably, aluminum is utilized, added in the form of fine particles.
Burn-rate modifiers accelerate (burn-rate catalysts) or decelerate (burn-rate depressants) the combustion reaction as desired. Iron oxide increases the burning rate, while lithium fluoride decreases the burning rate. Recently, several experimental results have shown that the catalytic activity of nano-sized catalysts is remarkably increased as compared to their bulk-size counterparts.
A plasticizer is a relatively low-viscosity organic liquid, which remarkably improves the processing properties of the propellant and extends pot life. The most commonly used plasticizers are DOA (dioctyl adipate), DOS (dioctyl sebacate) and IDP (isodecyl pelargonate) [14].
Not all of the hydroxyl bonding sites in the prepolymer used to form the binder are exhausted during crosslinking. Accordingly, the composite propellant is subjected to oxidative hardening and other contaminant reactions during storage. Antioxidants should be added to prevent oxidative hardening, and one of the most used is 2,2'-methylene-bis-(4-methyl-6-tert-butylphenol) [12].
The adhesion between the filler particles and the polymeric matrix, and hence the mechanical properties of these propellants, may be improved by the use of surface active agents called ''bonding agents'' [15]. A bonding agent produces an interaction between the oxidizer particles and the polymeric binder by forming either primary or secondary bonds with the oxidizer (by means of adsorption and attraction) and a primary bond with the binder. Bonding agents have been developed and are typically used in HTPB-based composite propellants since these polymers are weakly polar [16].
One of the most significant challenges in rocket motor design and construction is to develop an effective, well-engineered propellant inhibitor.

There are primarily two objectives of using the inhibitor with the propellant. It ensures no burning of the inhibited area and thus leads to controlled burning (end burning, internal burning or external burning) as per requirement. The inhibitor also protects the rocket case, which otherwise would be exposed to hot propellant gases [17].
An inhibitor is simply a layer of heat resistant material that is bonded to one (or more) surfaces of a propellant grain and has the sole purpose of preventing combustion from occurring on that particular surface. There is an intimate relationship between the total burning surface (how it changes throughout the burning duration) and the rocket motor internal ballistic characteristics (pressure, burn rate, thrust). It must be noted that the inhibitor represents a critical component of the rocket motor system and the choice of inhibitor requires a careful balance of reliability and efficient use of limited resources [18].
Properties that are required for an effective inhibitor are: good bonding characteristics, low thermal conductivity, resistance to thermal degradation, resistance to creep in the presence of gas flow, low cost and availability.

3. EXPERIMENTAL PART
The compound for PVC pipe extrusion consists of PVC K67, pigments, calcium carbonate, organotin mercapto esters, paraffin wax, calcium stearate, oxidized PE wax and acrylics. Production of the extruded PVC pipe was done on a twin screw counter-rotating extruder with a cooling system with vacuum bath RWN 1, produced by Cincinnati-Milacron, Wien, Austria.
The propellant used for the investigation in this paper consists of a liquid phase and a solid phase. The liquid phase consists of prepolymer butadiene with hydroxyl end groups (HTPB), a curing agent isophorone with two isocyanate groups (IPDI), dioctyl adipate, phenyl-β-naphthylamine and triethylenetetramine. The solid phase consists of an inorganic filler (AP) and a metal fuel (aluminum). The propellant formulation was mixed in a 1-gallon Baker-Perkins planetary mixer.
The bond between the rocket propellant and the thermoplastic tube was produced using casting technology, by direct pouring of the homogenized liquid and solid phases of the propellant into the extruded PVC pipe. The curing of the cast propellant was performed according to the appropriate procedure [19]. After curing of the propellant, a part of the PVC pipe was removed by machining, and by this procedure a sample for the investigation of bond strength between the thermoplastic material and the composite propellant was obtained.

4. RESULTS
A sample, prepared by the procedure described in Section 3 of the paper and used for determining the rocket propellant-extruded PVC bond strength, is shown in Fig.1.

Figure 1. Sample for determining rocket propellant-extruded PVC bond strength

In Fig.1 the grey cylinder is rocket propellant (external diameter 49.6 mm), the yellow square parts are pieces of extruded PVC (dimensions 10 mm x 10 mm) and the black rectangular part is a special pusher. The rocket propellant-extruded PVC bond is exposed to compressive stress from the upper side using the special pusher. The pusher has an internal cylindrical area and a flat lower area. The internal cylindrical surface of the pusher, which is in direct contact with the external surface of the rocket propellant, has a radius equal to the external radius of the rocket propellant. The lower flat area of the pusher is in direct contact with the upper flat area of the square PVC part. The pusher, positioned in the described manner, acts on the rocket propellant-extruded PVC bond under compressive stress from the upper side.
The described testing parameters are similar to those presented in the standard for determining the shear properties of a bond [20].
Sample 1 just before the end of testing of the rocket propellant-extruded PVC bond strength is shown in Fig.2.

Figure 2. Sample 1 just before the end of testing of rocket propellant-extruded PVC bond strength

Determination of the rocket propellant-extruded PVC bond strength was carried out at 20 °C and 50 °C.
The bond mark (BM), single values (xi) and the arithmetic mean value with standard deviation (x̄) of the rocket propellant-extruded PVC bond strength for sample 1, determined at 20 °C, are shown in Table 1.

Table 1. Rocket propellant-extruded PVC bond strength for sample 1 determined at 20 °C
BM          1     2     3     4     5     6     7     8     9
(xi) (N)  72.6  73.0  62.2  66.8  67.0  83.0  69.0  75.6  81.8

Table 1. Continued
BM         10    11    12    13    14    15    16    17    18
(xi) (N)  78.2  82.0  81.8  85.0  80.8  87.0  76.2  71.8  75.2
(x̄) (N)  76.1 ± 7.0

The bond mark (BM), single values (xi) and the arithmetic mean value with standard deviation (x̄) of the rocket propellant-extruded PVC bond strength for sample 2, determined at 20 °C, are shown in Table 2.

Table 2. Rocket propellant-extruded PVC bond strength for sample 2 determined at 20 °C
BM          1     2     3     4     5     6     7     8     9    10    11    12
(xi) (N)  80.6  76.8  62.6  74.2  79.6  78.2  77.2  72.0  73.8  78.6  85.3  72.6
(x̄) (N)  75.9 ± 5.6

The bond mark (BM), single values (xi) and the arithmetic mean value with standard deviation (x̄) of the rocket propellant-extruded PVC bond strength for sample 2, determined at 50 °C, are shown in Table 3.
Table 3. Rocket propellant-extruded PVC bond strength for sample 2 determined at 50 °C
BM          1     2     3     4     5     6     7     8     9    10    11    12
(xi) (N)  68.0  64.4  61.0  64.4  60.0  58.6  68.6  71.8  68.4  62.4  57.2  62.4
(x̄) (N)  64.0 ± 4.4
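The reported mean values and standard deviations can be reproduced directly from the single values listed in Tables 1-3. The short Python sketch below does this with the standard library only; it is an illustrative check (the paper does not state whether a sample or population standard deviation was used, so the last digit may differ by rounding).

    from statistics import mean, stdev

    # Single bond-strength values (N) transcribed from Tables 1-3.
    sample1_20C = [72.6, 73.0, 62.2, 66.8, 67.0, 83.0, 69.0, 75.6, 81.8,
                   78.2, 82.0, 81.8, 85.0, 80.8, 87.0, 76.2, 71.8, 75.2]
    sample2_20C = [80.6, 76.8, 62.6, 74.2, 79.6, 78.2, 77.2, 72.0, 73.8,
                   78.6, 85.3, 72.6]
    sample2_50C = [68.0, 64.4, 61.0, 64.4, 60.0, 58.6, 68.6, 71.8, 68.4,
                   62.4, 57.2, 62.4]

    for label, data in [("sample 1, 20 degC", sample1_20C),
                        ("sample 2, 20 degC", sample2_20C),
                        ("sample 2, 50 degC", sample2_50C)]:
        print(f"{label}: {mean(data):.1f} +/- {stdev(data):.1f} N")

    # Relative drop of bond strength between 20 degC and 50 degC
    drop = (mean(sample2_20C) - mean(sample2_50C)) / mean(sample2_20C)
    print(f"reduction from 20 degC to 50 degC: {drop:.1%}")  # ~16 %, i.e. "about 15 %"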

The arithmetic mean values of the rocket propellant-extruded PVC bond strength for samples 1 and 2, determined at 20 °C, are 76.1 N and 75.9 N, respectively.
The arithmetic mean value of the rocket propellant-extruded PVC bond strength for sample 2, determined at 50 °C, is 64.0 N.
By comparison of the obtained data for the sample 2 bond strength, determined at 20 °C and 50 °C, it can be seen that this parameter decreases with increasing temperature. The same reduction of bond strength can be seen between sample 1, tested at 20 °C, and sample 2, tested at 50 °C. In both cases, the reduction of bond strength is about 15 %.
The presented numerical values are the basis for the quantitative assessment of the nature of the bond between the rocket propellant and the extruded PVC.
Visual inspection of the separated parts of the bond between the rocket propellant and the extruded PVC represents the basis for the qualitative assessment of the mentioned bond.
Concerning the qualitative assessment of the bond nature, in principle, there are two possibilities: the first is that separation of the bonded parts happens at the interphase, i.e. at the surface at which the parts were bonded, and that the separated parts remain compact; the second is that separation of the bonded parts happens in such a way that one part retains a certain quantity of the other part, i.e. the bond remains undamaged and one of the parts is separated into two pieces.
In the first case, one speaks of an adhesive mode of part separation, and in the second of the so-called cohesive mode of part separation. In the former, the adhesion force between the bonded parts is lower than the cohesive energy of either material. In the latter, the adhesion force between the bonded parts is higher than the cohesive energy of one of the bonded materials.
The separated parts of the rocket propellant and the extruded PVC after testing at 20 °C (a) and 50 °C (b) are shown in Fig.3. By visual inspection of the separated parts it can be seen that on the extruded PVC part there is a piece of rocket propellant, and that on the rocket propellant surface, which was in direct contact with the PVC, a piece is missing. The part of the rocket propellant remaining on the separated PVC piece corresponds completely, by size and configuration, to the part missing from the separated surface of the rocket propellant. Separation within the rocket propellant happened near the interphase of the separated parts and in the same manner at 20 °C and at 50 °C. This was determined for all tested bonds of all samples at all temperatures.
Figure 3. The separated parts of rocket propellant and extruded PVC after testing at 20 °C (a) and 50 °C (b)

It is obvious that cohesive separation of the tested bond occurred, i.e. that the adhesive force of the rocket propellant-extruded PVC bond is higher than the cohesive energy of the tested rocket propellant.

5. CONCLUSION
Based on the analysis of all data presented in the text, tables and figures, it can be concluded:
1. An investigation of the bond strength between thermoplastic unplasticized extruded PVC and cast hydroxyl-terminated poly(butadiene)-based composite rocket propellant, for a cartridge loaded grain, was realised.
2. The rocket propellant-extruded PVC bond was exposed to compressive stress from the upper side using a special pusher.
3. For the quantitative assessment of the rocket propellant-PVC bond nature, the important result is that the bond strength at 20 °C and 50 °C is 76.0 N and 64.0 N, respectively.
4. Concerning the qualitative assessment, a cohesive separation between the rocket propellant and the extruded PVC was registered. Separation of the bonded parts did not happen at the interphase but within the rocket propellant. On the separated extruded PVC surface there is a part of rocket propellant, and on the rocket propellant surface, which was in direct contact with the PVC, an appropriate piece is missing. This was determined on the tested samples at all temperatures.
5. Based on the presented data, it can be stated that the qualitative assessment of the rocket propellant-extruded PVC bond nature is, at the least, equally as important as the quantitative, purely numerical, assessment.

References
[1] Cadogan D.F. and Howick C.J., "Plasticizers" in Ullmann's Encyclopedia of Industrial Chemistry, Wiley-VCH, Weinheim, 2000. doi: 10.1002/14356007.a20_439
[2] Brydson J.A., Plastics Materials, Newnes-Butterworths, London, 1975.
[3] Handbook of PVC Pipe Design and Construction, Fifth Edition, December 2012, Industrial Press, Inc., 989 Avenue of the Americas, New York, NY 10018.
[4] http://w.patents.com/us-4058543.html
[5] http://sasolwwwax.us.com/pvc.html
[6] Oberg E., Jones F.D., Horton H.L., Ryffel H.H. (2000), Machinery's Handbook (26th ed.), New York: Industrial Press, ISBN 0-8311-2635-3.
[7] Todd R.H., Allen D.K., Alting L. (1994), Manufacturing Processes Reference Guide, Industrial Press Inc., ISBN 0-8311-3049-0.
[8] Groover M.P., Fundamentals of Modern Manufacturing, Materials, Processes and Systems, John Wiley & Sons Inc, Hoboken, 2010.
[9] Handbook of PVC Pipe Design and Construction, Fifth Edition, Industrial Press, Inc., New York, 2010.
[10] Saha K.U., Jet Propulsion, Solid Rocket Propellant, www.iitg.ernet.in/Jet propulsion/qip-jp-27 Solid%20Rocket%20Propellant...
[11] Brzić S., Jelisavac Lj., Galović J., Simić D., Petković J., Viscoelastic Properties of Hydroxyl-Terminated Poly(butadiene)-based Composite Rocket Propellants, Hemijska Industrija, 2014, Vol. 68, pp. 435-443.
[12] Amtower K.P., Propellant Formulation, US Patent 7 011 722 B2, 2006.
[13] Lazić, Ultrafini amonijumperhlorat u kompozitnim raketnim gorivima (Ultrafine ammonium perchlorate in composite rocket propellants), 18. Simpozijum o eksplozivnim materijama, Kupari, 1990, pp. 46-55.
[14] Seyidoglu T. and Bohn M.A., Effect of curing agents and plasticizers on the loss factor curves of HTPB-binders quantified by modelling, Proceedings of the 18th International Seminar on New Trends in Research of Energetic Materials, Pardubice, Czech Republic, April 15-17, 2015, pp. 794-815.
[15] Agrawal J.P., High Energy Materials: Propellants,
Explosives and Pyrotechnics, Wiley, Weinheim, 2010,
pp. 1-464.
[16] Consaga P.J., Dimethyl Hydantoin Bonding Agents in
Solid Propellants, US Patent 4 214 928, 1980.
[17] Ackley A.W., Greenlee T.W. and Gustavson C., Bonding of composite propellant in cast-in-case rocket motors, Journal of Spacecraft and Rockets, Vol. 3, No. 3 (1966), pp. 413-418. doi: 10.2514/3.28461
[18] Gordon S., Evans I.G., Jones P.G., Solid Propellant with Inhibitor Layer in Rocket Motor, US Patent 3 991 565, 1976.
[19] Brzi S. and Radulovi J., Dynamic-Mechanical
Investigation of Cured Filled Polymeric System
Hydroxyl Terminated Poly(Butadiene)/Isophorone
Diisocyanate, Scientific Technical Review, ISSN
1820-0206, 2013, Vol. 63, No.4, pp.32-39.
[20] ASTM D 905, Standard Test Method for Strength Property of Adhesive Bond in Shear by Compression Loading, ASTM Committee on Standards, Philadelphia, 2013.


SYNTHESIS OF Re/Pd HETEROGENEOUS CATALYSTS SUPPORTED ON HMS USING SOL-GEL METHOD FOLLOWED BY SUPERCRITICAL DRYING WITH EXCESS SOLVENT

DRAGANA PROKIĆ VIDOJEVIĆ
Military Technical Institute, Ratka Resanovica 1, 11030 Belgrade, Serbia, drprokic@yahoo.co.uk
SANDRA B. GLIŠIĆ
Faculty of Technology and Metallurgy, University of Belgrade, Karnegijeva 4, 11000 Belgrade, Serbia
ALEKSANDAR M. ORLOVIĆ
Faculty of Technology and Metallurgy, University of Belgrade, Karnegijeva 4, 11000 Belgrade, Serbia

Abstract: Hexagonal mesoporous silicas (HMS) have found promising applications as heterogeneous catalyst supports. In this paper, HMS with Ti ions (Si/Ti atomic ratio of 40) was prepared using the sol-gel method. The mesoporous texture was assembled with dodecylamine as the structure directing surfactant (S) and TEOS and titanium butoxide as the inorganic precursors (I). In order to improve textural characteristics, apart from the conventional air-drying, supercritical drying with excess ethanol was applied. The obtained material was used as the support for the preparation of Re/Pd heterogeneous catalysts. The Ti incorporation was confirmed by FT-IR spectroscopy and the morphology of the samples was analysed using SEM.
Keywords: Mesoporous Silica, Rhenium oxide, Ti-HMS, Sol-Gel , Supercritical drying.

1. INTRODUCTION
The hydroprocessing of heavy feeds has become a worldwide challenge in the last decades and deeper understanding of this process has motivated researchers to study it. Since lighter petroleum fractions of higher quality are getting depleted, heavy crude oil, extra heavy crude oil and bitumen account for 70% of the world reserves of petroleum at the moment [1]. Together with decreased crude oil quality, increased production of clean automotive fuels is required [2]. Therefore, the major hurdle researchers are trying to overcome is the development of more efficient multifunctional catalysts.
Heavy oil is a type of petroleum with API gravity less than 20° and sulfur content often higher than 2 wt% [3]. The lower the API gravity of the crude, the higher the level of different impurities present in it. One of the most problematic is asphaltene, a high molecular weight compound composed of aromatic rings carrying alkyl chains of up to 30 carbon atoms [4]. Their quantity in heavy crudes is significant, accounting for 11 to 25 wt% [5]. The major problem during hydroprocessing is its deposition on the catalyst surface, triggering fast catalyst deactivation and deposit formation. Moreover, as a large molecule, with an aromatic core diameter of around 11 to 14 Å [6], its diffusion into catalyst pores is difficult.
Other undesirable compounds present in heavy feeds are sulfur and nitrogen. The reactivity of sulfur species varies widely, with 4,6-dimethyldibenzothiophene (4,6-DMDBT) being the most refractory one. This alkyl-substituted DBT is sterically hindered from adsorbing on the catalyst surface and cannot be easily converted into H2S. Deep hydrodesulphurization (HDS) therefore requires a catalyst highly active in hydrogenation, which is the preferred reaction pathway [7-9].
The last, but not the least problematic, are metals, usually present as organometallic compounds in asphaltenes. The most frequent ones are Ni and V, whose contents vary from 50 to 500 ppm, depending on the nature of the crude. During hydroprocessing, hydrodemetallation leaves deposits of metal sulfides in the catalyst pores, causing a decrease in the number of active catalytic sites and impeding the transport of reactants to the internal surface. Regarding all the aforementioned, the main objective of heavy oil hydrotreating is to reduce the level of asphaltenes, sulfur, nitrogen, and metals in the feed, so as to enhance the quality and quantity of liquid yield. In order to achieve that, reactions governed by the catalysts, like hydrogenation (HYD), hydrodesulphurization (HDS), hydrodenitrogenation (HDN), hydrodemetallation (HDM), hydrocracking (HC), etc., are required.
Therefore, highly efficient hydrogenation catalysts are essential for hydrotreating processes. The most important parameters in catalyst development are considered to be its textural properties - meso or macro pores [10, 11] - and appropriate acidity [12, 13], along with the active phase dispersion [14]. The most often used active metals are the transition metals Mo/W as the main active component, promoted with Co/Ni. Mixed silica and alumina oxides are frequently used supports, because of the possibility to control textural and acidic properties with the appropriate preparation method [15, 16]. Other materials like carbon [17], clays [18, 19] and zeolites [20] are tested as well. Recently, doping with noble metals has been investigated more deeply, because some research revealed that their addition can reduce deposit formation [13, 21, 22] and enhance active metal dispersion and reducibility [23]. Moreover, their extraordinary hydrogenation ability through hydrogen spillover makes them interesting for hydroprocessing reactions [24, 25].

Lately, the development of HDT catalysts has focused on the improvement of support textural characteristics [26]. Ordered mesoporous solids like MCM-41 [27] and HMS [28-31] are considered as promising supports. Their advantage is enlarged pore dimensions, ranging from 3 to 30 nm, suitable for diffusion of large aromatic molecules. In addition, the high surface area (around 700-1000 m2/g) allows better metal dispersion and more active sites. Corma et al. [32, 33] have already tested NiMo/MCM-41 for HDS, HDN and HC reactions and reported better activity in comparison with other supports - ASA and USY zeolite. Compared with MCM-41, HMS mesostructures possess thicker framework walls, smaller crystallite size and superior thermal stability [29, 30]. Moreover, HMS seems especially suitable for the synthesis of bifunctional catalysts, due to its acid properties. Montesinos-Castellanos et al. [34] have modified HMS with the incorporation of heteroatoms within its framework and reported that the presence of Al, Ti and Zr ions increased the hydrogenation ability of NiMo catalysts in the reaction of naphthalene. A similar conclusion was reached by Halachev et al. [35] for the NiW/(P)Ti-HMS catalyst. Zepeda et al. [7, 36] reported an increased pore diameter with Ti incorporation, as a result of the higher bond length of Si-O-Ti compared to Si-O-Si. Numerous studies made with the conventional active metals CoMo, NiMo, CoW, NiW supported on Al, Ti-HMS [7, 36-38] claim enhanced HDT activity with these supports. Since the HMS mesostructure with incorporated metal ions has exceptional textural characteristics and high hydrogenation capacity, it is a good choice for the catalyst support.
Searching for a more efficient active phase, nonconventional sulfides have gained much attention. Rhenium sulfide became interesting, since its activity in HDS is positioned in the upper part of the volcano curve, being better than Mo and W sulfides [39]. Following that result, Escalona et al. [40, 41] and Sepulveda et al. [42] tested Re sulfide on different supports in HDS of gas oil and of 4,6-dimethyldibenzothiophene, respectively. In both studies, a strong influence of the support on catalytic activity was observed. Sepulveda found 6 times higher intrinsic HDS activity on SiO2 than on alumina (for the same metal loading), signifying silica as an appropriate support for ReS2. However, only a few studies could be found in the literature about the effect of a promoter on Re sulfide activity in hydrotreating reactions.
Within the above framework, in order to develop a more active hydrotreating catalyst, the high hydrogenation capacity of rhenium sulfide has motivated further research of Re-based catalysts. The objective of this work is to synthesize a supported rhenium catalyst with the addition of palladium as a promoter. Ti-HMS with an atomic ratio of Si/Ti=40 was used as a support, relying on Zepeda's findings that this ratio provides the largest HDS activity for DBTs [7]. Four slightly modified Ti-HMS supports were made. In order to improve textural characteristics, apart from the conventional synthesis procedure and air drying, mesitylene was added as a swelling agent and supercritical drying in excess solvent was applied. The Ti incorporation was confirmed with FT-IR spectroscopy and the morphology of the samples was analysed using SEM.

2. EXPERIMENTAL
2.1. Materials
Ti-substituted mesoporous silicas were synthesized using tetraethyl orthosilicate (TEOS, Aldrich 98%) and titanium butoxide (Ti-but, Aldrich 97%) as inorganic precursors. The surface directing agent was dodecylamine (C12H25NH2, DDA, Aldrich 98%). The reaction mixture for two supports was modified by the addition of the swelling agent mesitylene (C9H12, Aldrich 98%). Ammonium perrhenate (NH4ReO4, Aldrich 99%) and palladium chloride (PdCl2, Aldrich 99.99%) were used as the active phase precursors.
2.2. Preparation of the supports
In order to examine the influence of cosurfactant addition and the way of drying on textural characteristics, four different Ti-HMS supports were synthesized. The procedure for the synthesis of pure hexagonal mesoporous silica (HMS) was first proposed by Tanev and Pinnavaia [43, 44]. It is based on the neutral S°I° templating route and H-bonding between a neutral inorganic precursor (I°) and a neutral primary amine (S°) as the surface directing agent. The incorporation of the Ti heteroatom was done following the procedure published by Gontier and Tuel [45]. The molar composition of the reaction mixture was: 0.1 TEOS : 0.025 Ti-but : 0.65 EtOH : 0.1 isopropyl alcohol : 0.027 DDA : 3.6 H2O : 0.002 HCl. Basically, a first solution (A) was obtained by mixing TEOS, Ti-but, EtOH and isopropyl alcohol. A second solution (B) was obtained by dissolving DDA in water and HCl. Solution A was slowly added to solution B under vigorous stirring, and the stirring was maintained for half an hour. The reaction products were aged at ambient temperature for 1.5 hours, filtered under vacuum and washed several times with ethanol. Two supports, designated as 1 and 2, were obtained using this mixture in the following way: support 1 was dried in excess supercritical ethanol at 100 bar and 255 °C for half an hour, and support 2 was dried at room temperature for 24 hours. In order to increase the pore diameter, two additional supports were prepared by a slight modification of the aforementioned mixture with 0.112 mol of the swelling agent mesitylene, as first proposed by Kresge et al. [46]. Supports designated as 3 and 4 were dried in excess supercritical ethanol at 150 bar and 250 °C for half an hour and in air for 24 hours, respectively. Subsequently, all samples were calcined in air at 823 K for 4 h, with a heating rate of 2.5 K/min, being held for half an hour at 823 K.
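As an illustration of the recipe above, the molar composition can be converted into gram quantities for a laboratory-scale batch. The sketch below is only a worked example (the batch basis and the use of tabulated molar masses are assumptions, not data from the paper); it also checks that the stated drying conditions lie above the critical point of ethanol (about 241 °C and 61 bar).

    # Molar ratios of the reaction mixture as given in Section 2.2.
    ratios = {            # mol per batch unit
        "TEOS": 0.1, "Ti-but": 0.025, "EtOH": 0.65, "iPrOH": 0.1,
        "DDA": 0.027, "H2O": 3.6, "HCl": 0.002,
    }
    molar_mass = {        # g/mol, standard tabulated values
        "TEOS": 208.33, "Ti-but": 340.32, "EtOH": 46.07, "iPrOH": 60.10,
        "DDA": 185.35, "H2O": 18.02, "HCl": 36.46,
    }

    for name, n in ratios.items():
        print(f"{name:>7}: {n:6.3f} mol = {n * molar_mass[name]:7.2f} g")

    # Drying conditions from Section 2.2 versus the critical point of ethanol
    # (Tc ~ 241 degC, Pc ~ 61 bar) - both runs are in the supercritical region.
    for T_C, p_bar in [(255, 100), (250, 150)]:
        assert T_C > 241 and p_bar > 61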

2.3. Preparation of the catalysts
For the preparation of the Pd promoted Re catalysts, with 1 wt% of Pd, a procedure similar to that for supports 1 and 2 was used, except that the appropriate amount of PdCl2 was dissolved in solution B before the addition of solution A. The following steps were the same until drying, when Re was added. Two methods of impregnation were adopted. In both catalysts, the Re content was 15 wt%.
For the first catalyst, impregnation using supercritical solvent was applied. A 5 ml aqueous solution of NH4ReO4 was added to 150 ml of ethanol before mixing with the Pd/(Ti-HMS) filtrate. The mixture was introduced into a 300 cm3 batch autoclave, heated until the supercritical state of the solvent was reached and kept for half an hour at a pressure of 100 bar and a temperature of 250 °C. Subsequently the solvent was released. The sample was denoted as RePd/(Ti-HMS)SC, imp.
For the second catalyst, wetness impregnation of the Pd/(Ti-HMS) was employed. An aqueous solution of NH4ReO4 was mixed with the filtered Pd/(Ti-HMS). Under constant stirring, the slurry was heated to 373 K to remove the water and ammonia, followed by air drying for 24 hours. The sample was denoted as RePd/(Ti-HMS)W, imp.
Both catalysts were calcined in air at 823 K for 4 h, with a heating rate of 2.5 K/min, being held at 823 K for half an hour.
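To give a feel for the quantities involved, the nominal loadings from this section (15 wt% Re and 1 wt% Pd) can be translated into precursor masses. The following sketch is purely illustrative: the 10 g catalyst basis is an assumption, and it further assumes the loadings refer to metal in the final calcined catalyst.

    # Illustrative precursor-mass estimate for the nominal loadings in Section 2.3.
    # Assumptions (not from the paper): 10 g of final catalyst; loadings are
    # expressed as metal mass fractions of that catalyst.
    M_NH4ReO4, M_Re = 268.24, 186.21   # g/mol
    M_PdCl2, M_Pd = 177.33, 106.42     # g/mol

    catalyst_g = 10.0
    re_g = 0.15 * catalyst_g           # 15 wt% Re target
    pd_g = 0.01 * catalyst_g           # 1 wt% Pd target

    nh4reo4_g = re_g * M_NH4ReO4 / M_Re
    pdcl2_g = pd_g * M_PdCl2 / M_Pd
    print(f"NH4ReO4 needed: {nh4reo4_g:.2f} g")   # ~2.2 g
    print(f"PdCl2 needed:   {pdcl2_g:.2f} g")     # ~0.17 g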
2.4. Characterization of the samples
The Ti-ion incorporation in all four supports was confirmed on an FT-IR spectrophotometer, Thermo Nicolet iS10, operating in the wavenumber range from 4000 cm-1 to 600 cm-1.
The substrate and catalyst morphology and metal concentrations were determined by a Scanning Electron Microscope (SEM) JEOL JSM-6610 LV, equipped with an Energy Dispersive X-ray Spectrometer (EDS) Oxford X-Max. To reduce the charging effect, the samples were coated with a thin (5 nm) conducting layer of gold. An electron microprobe used in energy dispersive mode (EDX) was employed to obtain information on the titanium dispersion in the samples.

3. RESULTS AND DISCUSSION
3.1. FT-IR spectroscopy of calcined supports
The easiest way to determine Ti-O-Si bond formation is to use IR spectroscopy. The IR spectra of the four supports are shown in Picture 1.
The asymmetric vibration of the Si-O-Si bond shows a band at 1087 cm-1, and its shift to lower wavenumbers is observed when Ti ions are incorporated into the HMS structure [47]. This band is positioned at ca. 1073 cm-1 for all four measured supports. Moreover, the literature reports that the IR band appearing in the region from 910 to 970 cm-1 is a characteristic vibration caused by the formation of Ti-O-Si bonds [48-51]. Its exact position depends on the chemical composition of the sample and the instrument characteristics. According to that, the band at ca. 970 cm-1 found in all four samples can be assigned to the asymmetric stretching vibration νas(Si-O-Ti). However, this is not a definite proof, since Ti-free silicates also exhibit a similar band. Therefore, it is necessary to compare its intensity to the intensity of the band at ca. 802 cm-1, an indication of the symmetric stretching vibration νs(Si-O-Si) [52]. When the intensity of the band at ca. 970 cm-1 is enhanced compared to the intensity of the band at ca. 802 cm-1, this can be taken as an indication of Ti-ion incorporation [53]. This enhancement can be seen in Picture 1. The more enhanced it is, the more Ti-O-Si linkages are formed.

Picture 1. FT-IR spectra of calcined supports

Also, hydrogen bonding between silanol groups is displayed by the broad band at ca. 3380 cm-1. This band, found in the hydroxyl region, is an indication of the νOH(Si-OH) vibration. In pure silicates, it is usually found at a higher wavenumber, ca. 3445 cm-1. This shift reveals more hydrogen bonding with Ti incorporation, caused by the existence of defective sites.

3.2. SEM micrographs of calcined supports
SEM-EDS analyses were used to examine the morphology of the particles and the homogeneity of the Ti distribution in the HMS framework.
In Picture 2 the SEM images of the studied samples are presented. These micrographs show that Ti-HMS is comprised of sphere-shaped particles. This is in accordance with previous findings of other authors [45]. These particles are organized in nonuniform agglomerates. Comparing particle sizes of the supports prepared without mesitylene (images 1 and 2) indicates that particles of the support dried using excess of SC solvent are smaller in diameter. This should influence the increase of textural mesoporosity, which does not arise from the framework pore volume, but from the interparticle voids.

Picture 2. SEM micrographs of calcined supports: 1) Ti-HMS, SC dried; 2) Ti-HMS, air dried; 3) Ti-HMS, mesitylene, SC dried; 4) Ti-HMS, mesitylene, air dried.

Further sorption measurements are necessary to confirm this assumption. Supports prepared with the cosolvent addition (images 3 and 4) seem to be comprised mostly of even smaller particles. A broadened particle size distribution is observed in the sample obtained by SC drying, with aggregates of particles bigger than 1 µm found sporadically. According to previous findings, mesitylene should alter the framework pore dimensions, but also expand the textural mesoporosity, depending on the polarity of the reaction medium [31].
SEM micrographs after active phase addition are shown in Picture 3. It seems that the morphology of the support was not altered after the active metals surcharge, neither in SC impregnation nor in wetness impregnation.

Picture 3. SEM micrographs of calcined catalysts: RePd/(Ti-HMS)SC, imp. and RePd/(Ti-HMS)W, imp.

In the EDX images shown in Picture 4, good Ti dispersion in HMS is observed for all samples.
In order to determine the active metal concentrations in the synthesized catalysts, EDS measurements were done. The mean results are given in Table 1. Slightly better incorporation of rhenium is found in the catalyst with Re addition by wetness impregnation in comparison to SC solvent impregnation. Since Pd is incorporated through synthesis in both cases, similar results are obtained.

Table 1. EDS results for two catalysts (wt%)
Element   RePd/(Ti-HMS)SC, imp.   RePd/(Ti-HMS)W, imp.
Pd               0.83                    0.92
Re              11.41                   14.62
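Comparing the EDS values in Table 1 with the nominal loadings from Sections 2.2 and 2.3 (15 wt% Re, 1 wt% Pd) gives a rough measure of how much of the intended metal ended up in each catalyst. The snippet below is just that arithmetic; treating the EDS values as bulk concentrations is a simplifying assumption.

    # Measured (EDS, Table 1) versus nominal metal loadings, wt%.
    nominal = {"Re": 15.0, "Pd": 1.0}
    measured = {
        "RePd/(Ti-HMS)SC, imp.": {"Re": 11.41, "Pd": 0.83},
        "RePd/(Ti-HMS)W, imp.":  {"Re": 14.62, "Pd": 0.92},
    }

    for catalyst, values in measured.items():
        for metal, wt in values.items():
            print(f"{catalyst:24s} {metal}: {wt / nominal[metal]:.0%} of nominal")
    # Re: ~76 % (SC impregnation) vs ~97 % (wetness impregnation)
    # Pd: ~83 % vs ~92 %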

Picture 4. EDS images of Ti dispersion in supports: 1) Ti-HMS, SC dried; 2) Ti-HMS, air dried; 3) Ti-HMS, mesitylene, SC dried; 4) Ti-HMS, mesitylene, air dried.

Additional work, currently in progress, is necessary to confirm the mesoscale pore diameters as well as to test the catalytic activity.

5. CONCLUSION
In the present work, four Ti-HMS supports have been
synthesized. The framework mesostructure was
assembled using neutral SI pathway. Dodecylamine is
used as a structure directing agent and mesitylene as an
auxiliary pore size modifier for two supports. Obtained
materials were dried using both conventional and
supercritical drying method. Supercritical drying was
performed by phase change to supercritical state followed
by subsequent removal of the solvent under supercritical
conditions, thereby eliminating multi-phase conditions in
the catalyst pores.
Two supports without cosurfactant were used for the preparation of PdRe catalysts. Active metal addition was done both by wetness impregnation of the support and by impregnation with the assistance of
supercritical solvent. The promoter is added through
synthesis in both cases. In the previous reports,
researchers have found that the neutral templating
synthesis method is very suitable for the synthesis of
mesoporous catalytic materials, which greatly facilitates
access to the framework mesopores.
The synthesized supports were characterized with
Fourier-transform infrared spectroscopy and the analysis
of FTIR spectra has confirmed Ti ion incorporation into
HMS framework. Also, high Ti dispersion in all supports
but diverse particle texture is observed using SEM-EDS.
Good active metals incorporation is found, as well.

References
[1] www.iae.org, Statistical Review of World Energy.
[2] www.epa.org, Environmental Protecting Agency.
[3] Speight,J.G.: The Chemistry and Technology of Petroleum, 5th edn., CRC Press, Taylor & Francis Group, Boca Raton, FL., 2013.
[4] Demirbas,A.: Petrol. Sci. Tech. 20 (5-6) (2002) 485.
[5] Ancheyta,J., Trejo,F., Rana,M.S.: Asphaltenes: Chemical Transformations During Hydroprocessing of Heavy Oils, CRC Press, Taylor & Francis Group, Boca Raton, FL., 2009.
[6] Andersen,S.I., Jensen,J.O., Speight,J.G.: Energy Fuels 19 (6) (2005) 2371-2377.
[7] Zepeda,T.A., Pawelec,B., Fierro,J.L.G., Halachev,T.: Appl. Cat. B: Env. 71 (2007) 223.
[8] Landau,M.V., Berger,D., Herskowitz,M.: J. Catal. 158 (1996) 236.
[9] Rothlisberger,A., Prins,R.: J. Catal. 235 (2005) 229.
[10] Rana,M.S., Ancheyta,J., Rajo,P., Maity,S.K.: Catal. Today 98 (2004) 151.
[11] Ancheyta,J., Rana,M.S., Furimsky,E.: Catal. Today 109 (2005) 1.
[12] Leyva,C., Rana,M.S., Trejo,F., Ancheyta,J.: Catal. Today 141 (1-2) (2009) 168.
[13] Rayo,R., Ramirez,J., Ancheyta,J., Rana,M.S.: Petrol. Sci. Technol. 25 (2007) 215.
[14] Eijsbouts,S., Heinemann,J.J.L., Elzerman,H.J.W.: Appl. Catal. A: Gen. 105 (1993) 69.
[15] Leyva,C., Ancheyta,J., Travert,A., Mauge,F., Mariey,L., Ramirez,J., Rana,M.S.: Appl. Cat. A: Gen. 425-426 (2012) 1.
[16] Ali,M.A., Tatsumi,T., Masuda,T.: Appl. Cat. A: Gen. 233 (2002) 77.
[17] Prins,R., de Beer,V.J.H., Somorjai,G.A.: Catal. Rev. Sci. Eng. 31 (1989) 1.
[18] Maity,S.K., Srinivas,B.N., Prasad,V.V.D.N., Singh,A., Dhar,G.M., Prasada Rao,T.S.R.: Surf. Sci. Catal. 113 (1998) 579.
[19] Hossain,M.M., Al-Saleh,M.A., Shalabi,M.A., Kimura,T., Inui,T.: Appl. Cat. A: Gen. 278 (2004) 65.
[20] Li,D., Nishijima,A., Morris,D.E.: J. Catal. 182 (1999) 339.
[21] Al-Saleh,M.A., Hossain,M.M., Shalabi,M.A., Kimura,T., Inui,T.: Appl. Catal. A 253 (2003) 453.
[22] Hossain,M.: Chem. Eng. J. 123 (2006) 15.
[23] Venezia,A.M., Murania,R., Pantaleo,G., Deganello,G.: Gold Bull. 40 (2007) 130.
[24] Niquille-Rothlisberger,A., Prins,R.: Catal. Today 123 (2007) 198.
[25] Niquille-Rothlisberger,A., Prins,R.: J. Catal. 242 (2006) 207.
[26] Mochida,I., Choi,K.H.: J. Jpn. Petrol. Inst. 47 (2004) 145.
[27] Klimova,T., Calderon,M., Ramirez,J.: Appl. Catal. A: Gen. 240 (2003) 29.
[28] Chiranjeevi,T., Kumar,P., Rana,M.S., Dhar,G.M., Prasada Rao,T.S.R.: J. Mol. Catal. A: Chem. 181 (2002) 109.
[29] Tanev,P.T., Pinnavaia,T.J.: Chem. Mater. 8 (1996) 2068.
[30] Pauly,T.R., Pinnavaia,T.J.: Chem. Mater. 13 (2001) 987.
[31] Zang,W., Pauly,T.R., Pinnavaia,T.J.: Chem. Mater. 9 (1997) 2491-2498.
[32] Corma,A., Martinez,A., Martinez-Soria,V., Monton,J.B.: J. Catal. 153 (1995) 25.
[33] Corma,A., Martinez,A., Martinez-Soria,V.: J. Catal. 169 (1997) 480.
[34] Montesinos-Castellanos,A., Zepeda,T.A.: Microp. Mesop. Mat. 113 (2008) 146.
[35] Halachev,T., Nava,R., Dimitrov,L.D.: Appl. Catal. A: Gen. 169 (1998) 111.
[36] Zepeda,T.A., Pawelec,B., Halachev,T., Fierro,J.L.G.: J. Catal. 242 (2) (2006) 254.
[37] Chiranjeevi,T., Kumar,P., Maity,S.K., Rana,M.S., Murali Dhar,G., Prasada Rao,T.S.R.: Microp. Mesop. Mat. 44 (2001) 547.
[38] Nava,R., Morales,J., Alonso,G., Ornelas,C., Pawelec,B., Fierro,J.L.G.: Appl. Catal. A: Gen. 321 (2007) 58.
[39] Pecoraro,T.A., Chianelli,R.R.: J. Catal. 67 (1981) 430.
[40] Escalona,N., Ojeda,J., Cid,R., Alves,G., Lopez Agudo,A., Fierro,J.L.G., Gil Llambias,F.J.: Appl. Cat. A: Gen. 234 (2002) 45.
[41] Escalona,N., Yates,M., Avila,P., Lopez Agudo,A., Fierro,J.L.G., Ojeda,J., Gil Llambias,F.J.: Appl. Cat. A: Gen. 240 (2003) 151.
[42] Sepulveda,C., Bellière,V., Laurenti,D., Escalona,N., García,R., Geantet,C., Vrinat,M.: Appl. Catal. A: Gen. 393 (2011) 288.
[43] Tanev,P., Chibwe,M., Pinnavaia,T.J.: Nature 368 (1994) 321.
[44] Tanev,P., Pinnavaia,T.J.: Chem. Mater. 8 (1996) 2068.
[45] Gontier,S., Tuel,A.: Zeolites 15 (1995) 601.
[46] Kresge,C.T., Leonowich,M.E., Roth,W.J., Vartuli,J.C., Beck,J.S.: Nature 359 (1992) 710.
[47] Damyanova,S., Dimitrov,L., Mariscal,R., Fierro,J.L.G., Petrov,L., Sobrados,I.: Appl. Catal. A: Gen. 256 (2003) 183.
[48] Gao,X., Wachs,I.E.: Catal. Today 51 (1999) 233.
[49] Schraml-Marth,M., Walther,K.L., Wokaun,A., Handy,B.E., Baiker,A.: J. Non-Cryst. Solids 143 (1992) 93.
[50] Beghi,M., Chiurlo,P., Costa,L., Palladino,M., Pirini,M.F.: J. Non-Cryst. Solids 145 (1992) 175.
[51] Salvado,I.M.M., Navarro,J.M.F.: J. Non-Cryst. Solids 147-148 (1992) 256.
[52] Duran,A., Serna,C., Fornes,V., Fernandez-Navarro,J.M.: J. Non-Cryst. Solids 82 (1986) 69.
[53] Boccuti,M.R., Rao,K.M., Zecchina,A., Leofanti,G., Petrini,G.: Stud. Surf. Sci. Catal. 48 (1989) 133.


UNDERSTANDING PLASMA SPRAYING PROCESS AND APPLICATION


IN DEFENSE INDUSTRY
BOGDAN NEDIĆ
University of Kragujevac, Faculty of engineering, Kragujevac, nedic@kg.ac.rs
MARKO JANKOVIĆ
Yugoimport SDPR, Belgrade, marko_grosnica@yahoo.com

Abstract: This paper presents the importance of understanding the plasma spraying process. The historical development of the process as well as its application in the defense and metal industry is described. The complexity of the process is shown through three separate but interrelated sub-processes: plasma generation, plasma-particle interactions, and coating formation. For each of the three mentioned phases, the problems that may occur, earlier research in this area and the important parameters are analyzed. In the conclusion, further directions of process development and research are listed.
Keywords: plasma spray, particle-interaction, coating, splat.

1. INTRODUCTION
Plasma is an electrically conductive gas consisting of ions, electrons and neutral molecules. Plasma is produced by transferring energy into a gas until the energy level is sufficient to ionize the gas, allowing the electrons and ions to act independently of one another. The plasma state is achieved when, under an electric field, currents can be sustained as the free electrons move through the ionized gas. Once the energy input is removed, the electrons and ions recombine, releasing heat and light energy [1].
Until now, plasma has found application in several technical and technological processes such as: plasma cutting, plasma welding, plasma spraying, plasma nitration, plasma physical vapor deposition (PVD), etc. This paper describes the plasma spraying process. The aim of the work is to demonstrate the importance of this process in the defense and metal industry, the complexity of the process, the problems that may occur, the research performed so far in this area, and further directions of process development and research.
The first idea of a plasma spray process was patented in 1909 in Germany, and the first structural plasma installation appeared in the 1960s, as the product of two American companies, Plasmadyne and Union Carbide. In the middle of the 70's, in Switzerland, the company Plasma-Technik AG was founded, and about 20 years later this company merged with the U.S. company Metco and formed a new company named Sulzer Metco, which is today one of the leading companies in the production of equipment for plasma installations [2]. A big growth spurt occurred in the 1980s with the invention of vacuum plasma spraying/low pressure plasma spraying (VPS/LPPS).
The plasma jet began to develop intensively together with the development of space technology. Actually, it has been shown that this technique is the only technically feasible method of implementation and maintenance of a continuous temperature in the order of 20000 K, in some cases up to 50000 K. In the past ten years the focus of research has been directed to one important type of application of this technology, the application of plasma in a plasma spray process.
The development of the plasma spray process is a result of the attempt to increase the temperature level above that of an oxy-acetylene flame. The main reason for switching from conventional methods of deposition to plasma jet deposition is to increase the temperature level and to control the jet atmosphere.

2. PLASMA SPRAY PROCESS
The plasma spray process is basically the spraying of molten or heat softened material onto a surface to provide a coating [3]. Powder material is injected into a very high temperature plasma flame, where it is rapidly heated and accelerated to a high velocity. The hot material impacts on the substrate surface and rapidly cools, forming a coating. This plasma spray process, carried out correctly, is called a "cold process" (relative to the substrate material being coated), as the substrate temperature can be kept low during processing, avoiding damage, metallurgical changes and distortion of the substrate material. So the advantage of the process is that molten powder particles with high melting points do not transfer a large amount of heat into the base material and do not violate the basic structure of the material [4].
Plasma sprayed coatings are generally much denser, stronger and cleaner than those of the other thermal spray processes, with the exception of HVOF - High Velocity Oxy Fuel, HVAF - High Velocity Air Fuel and cold spray processes. Plasma spray coatings probably account for the widest range of thermal spray coatings and applications, which makes this process the most versatile.

Picture 1. Plasma spray process

Plasma spraying is increasingly used for the dissociation of raw materials such as carbonates, oxides, sulfides, and various polymetallic ores. Almost any material which can be melted can be used.
Different combinations of surface layers significantly increase the resistance of the working parts to wear, abrasion, erosion, cavitation, corrosion and fatigue at low and elevated temperatures. For example, in aircraft jet engines, many of the component parts are subjected to very high temperatures as well as a corrosive and erosive environment. To limit wear and tear on these components and also give them thermal protection, a thin coating of a ceramic material called yttria stabilised zirconia (Y2O3 and ZrO2) is sprayed by plasma onto the component surfaces. The 1986 total world sales of ceramic coatings were $1.1 billion, of which 12% were used in military industries. World consumption of ceramic coatings in 2000 was three times higher than in 1986, which clearly shows the importance of increasing the resistance of the working parts. Of all ceramic coatings, 23% are applied by thermal spray processes [5].
Coatings range from engine to landing-gear applications. In landing-gear applications, these coatings are replacing hard chromium plating as the preferred coating method to provide improved performance.
In the defense industry, plasma spray coatings are used across the various branches to enhance and protect components on military platforms (aircraft, surface ships, submarines, etc.), and in machinery and weapon systems. Some typical plasma spraying applications in the defense industry are [5, 6]:
- application of tungsten carbide/cobalt coatings on the wear surfaces of the sealing labyrinth ring of the jet engine compressor, compressor rotor blades, cylinders, pins, etc.;
- application of protective coatings based on zirconium on combustion chambers and turbine rotor blades;
- super alloys for aerospace gas turbine vanes and shrouds to prevent hot gas erosion and corrosion;
- application of molybdenum on the piston rings of diesel engines;
- high-temperature superconducting coatings for electromagnetic interference (EMI) shielding.
A couple of plasma spraying techniques have been developed, distinguished by the surrounding atmosphere. The most commonly used are atmospheric pressure - APS and low pressure - VPS or LPPS.
Development of the VPS technology has led to significant improvements in the quality of coatings compared to coatings produced at atmospheric pressure. VPS coatings generally show a higher density than coatings deposited by the atmospheric plasma spray process. The velocities and temperature values of particles in the plasma jet are more uniform, which allows the production of homogeneous coatings of uniform thickness on parts with complex geometries.
The disadvantage of the plasma spray processes is a size limit on the cavities (coatings can be applied only in cavities which guns or torches can enter) and the complexity of the process. For a better understanding of the process and of progress, plasma spray must be analyzed as three separate but interrelated processes: plasma generation, plasma-particle interactions (usually called the particle heating zone), and coating formation (usually called the deposition zone) [7]. Over these three processes the operator can exercise some control in order to maintain better particle melting and motion and finally obtain better coatings.

2.1. Plasma generation
The different methods of plasma generation are applications of gas ionization. In principle, energy will be transferred to atoms or molecules in an elementary process that is sufficient to initiate ionization. There exist two basic mechanisms:
1. Increase of the energy of all internal degrees of freedom of the gas by application of heat. This can be accomplished directly through the container walls, or indirectly through chemical processes, compression or an electrical current. Plasma is generated by collision ionization of the particles and photo-ionization by the electromagnetic radiation in the hot gas. Such plasmas are generally close to their thermodynamic equilibrium state (isothermal plasmas).
2. Transfer of energy for effective ionization without substantial temperature increase of the gas, through particle or electromagnetic radiation and electrical current, respectively. Such plasmas are non-equilibrium plasmas with a high electron temperature Te >> Tgas (non-isothermal plasmas).

2.2. Plasma - particle interactions
The material usually enters the plasma jet in the form of powder, but it can also be fed in the form of solid wires, rods and cored wires. One of the main material requirements is that melting must occur without decomposition or sublimation. At this stage the transport phenomena of momentum transfer and heat transfer occur, and the following parameters are important for these processes [2]:
1. Powder characteristics - several factors were identified that may have a strong impact on the properties of both particles and coatings: specific characteristics of the powder resulting from the technological process by which the powder is obtained, powder fluidity, bulk density of the powder, particle size and shape (spherical particles improve the fluidity, "polyhedral" ones lead to unequal heating of the particles), and the width of the particle size interval of a given powder (for example, within the narrow interval of 22-45 µm, where the difference in particle size is only about twofold, the difference in particle masses can be up to 8 times, as the short check after this list illustrates). It is very important to establish a compromise, because a large narrowing of the particle size interval is neither technologically nor economically beneficial.
2. Powder injector - it must have a good permeability for the powder and as small an exit hole as possible, because it needs to provide a small angle of injection (resulting from the rebound of the powder particles off the nozzle walls). Injector nozzle diameters are in the range from 1.2 to 2 mm. For injectors of the same diameter, the carrier gas flow rate must be increased when using a curved injector, as compared to a straight one. The sharper the curvature radius, the greater the carrier gas flow rate must be. Research [8] has shown that the plasma torch exit profile for the 1.5 mm injector is more compact than the corresponding profile for the 1.8 mm injector. This implies that more particles bypassed the plasma jet when using the larger diameter injector.
3. Construction of the powder feeder - the mass input is in the range from a couple of pounds per hour up to several tens of kilograms per hour; the feeder has to provide a uniform, reproducible input for powders with very different characteristics (which provides the "fluidity" of powders).
4. Environment for spraying - when the plasma exits the nozzle, heat loss occurs due to radiation and redistribution of heat from the plasma to the entered material and the substrate. Cooling of the plasma leads to a reduction of plasma viscosity, but increases the turbulence (the Reynolds number also increases). There may be interference between the ambient gas and the plasma. In the case of the APS process, a significant amount of air, which can lead to oxidation of particles, is injected into the plasma.

Beginning in the late 1960s and into the early 1970s,


developments in plasma spray focused on controlled-atmosphere spraying. These developments were based on
the concept that if inert gases could be substituted for air
as the surrounding environment, which mixes into the
plasma spray jets, then coating oxide inclusions could be
reduced or eliminated.

When the powder feed rate is increased at fixed plasma


conditions, the temperature and velocity in the plasma jet
are reduced due to increased transfer of energy and
momentum to the particles. In addition, the particles at the injector exit have less momentum; therefore, their penetration into the plasma jet is more difficult.

If oxidation has to be reduced to a minimum the process


can be carried out in an inert atmosphere or in a vacuum
(VPS). By switching to VPS mode plasmatron parameters
are changed (plasma temperature decreases, the ratio of
voltage and current changes, the speed of plasma
increases and plasma is significantly elongated
geometrically), therefore plasmatron power used in the
VPS process must be much higher than in the APS
process.

One particularly common phenomenon which often


happens is powder overloading in the powder feed tube.
The technical term for this behavior is saltation, and it is
physically manifested when powder does not flow evenly
but appears to stop and start at irregular intervals every
few seconds. Increasing the carrier gas flow rate may
smooth out this erratic, irreproducible flow behavior, but,
in general, this solution has the undesirable effect of
causing the particles to have too high a momentum so that
they do not pass through the optimum part of the thermal
spray source. It is thus valuable to examine the types of
powder feed delivery systems used and the requirements
that constitute good powder feeding. A specific powder
may not have identical feeding characteristics with
different powder feeding devices [1].

The plasma jet momentum depends on the mass flow rate


of the plasma-forming gas and the plasma jet velocity.
The first parameter depends on the total gas flow rate and
the fraction of heavy gases (argon or nitrogen) within it;
the low-density gases such as helium and hydrogen
contribute little to the gas mass flow rate. When the
plasma gas mass flow rate decreases, the particle
penetration also becomes easier. When comparing a
ternary mixture with a classical Ar-H2 mixture,
penetration at the lower mass flow rates is easier, but
since the velocity of the 50 slm ternary mixture is greater
than that of the binary mixture, the particle penetration is
more difficult.
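As a rough illustration of why the heavy gases dominate the jet momentum, the sketch below converts volumetric flow rates into mass flow rates for an assumed Ar-H2 mixture and an assumed Ar-He-H2 ternary mixture; the flow rates and the jet velocity used here are illustrative assumptions, not values from the referenced study.

# Illustrative sketch (not data from [8]): gas mass flow rate and jet momentum
# flux for assumed plasma gas mixtures. slm = standard litres per minute.
MOLAR_MASS = {"Ar": 39.95e-3, "He": 4.00e-3, "H2": 2.02e-3, "N2": 28.01e-3}  # kg/mol
MOLAR_VOLUME = 22.414  # litres per mole at standard conditions (approx.)

def mass_flow_kg_s(flows_slm: dict) -> float:
    """Total gas mass flow rate in kg/s for a dict of {gas: slm}."""
    return sum(slm / MOLAR_VOLUME * MOLAR_MASS[g] / 60.0 for g, slm in flows_slm.items())

if __name__ == "__main__":
    binary = {"Ar": 45, "H2": 15}             # assumed Ar-H2 mixture
    ternary = {"Ar": 30, "He": 30, "H2": 10}  # assumed ternary mixture
    v_jet = 1500.0                            # assumed mean jet velocity, m/s
    for name, mix in [("binary Ar-H2", binary), ("ternary Ar-He-H2", ternary)]:
        mdot = mass_flow_kg_s(mix)
        print(f"{name}: mdot = {mdot*1e3:.2f} g/s, momentum flux ~ {mdot*v_jet:.2f} N")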

Powder characteristics - several factors were identified that may have a strong impact on the properties of both the particles and the coatings: the specific characteristics of the powder related to the technological process by which it is obtained, the powder fluidity, the bulk density of the powder, and the particle size and shape.

The plasma velocity depends on the plasma column constriction within the anode nozzle. This constriction is a function of the diameter of the plasma column relative to that of the nozzle. The plasma column diameter increases with the arc current and decreases with an increasing fraction of He and, especially, with the amount of H2 in the feed gas. Since the jet velocity increases roughly with the arc current, an increase in I at constant mass flow rate makes particle penetration more difficult. The optimum injection can be determined as described earlier.


2.3. Coating formation


Formation of coatings is a stochastic process in which
particles with certain size, velocity, and temperature
distributions impact on a substrate. In general, the
deposition process of the sprayed particles can be divided
into three steps as illustrated in Picture 2 [9].

Picture 2. Scheme of the splat formation process


When hitting the substrate, each molten droplet splats
onto the surface, forming a pancake-like structure that
rapidly solidifies. It is important that the droplet
thoroughly wets the substrate surface, and attention needs to be paid to the composition of the coating material to ensure that this happens.

Each splat has a thickness in the micrometer range and a


length that varies across the range from several to above
100 micrometers. Splats overlap one another as the
deposit builds up to the required thickness. Often, there
are small voids present as well as inclusions of rogue
materials such as metal oxides. These can interfere with
the mechanical strength of the coating and lead to poor
adhesion to the substrate.

Picture 3. Splat-formation
Recent research [10] has shown that, if a molten droplet, on hitting the substrate surface, forms a disc-shaped splat rather than a splashed splat, the coating formed tends to have good adhesion and cohesion with reduced void space. For a disc-shaped splat to form, the dynamic impact pressure of the in-flight particle prior to impingement should act perpendicular to the substrate surface. When splashing or fragmentation of the splat occurs, the splashing fingers fly parallel to the substrate surface, so the dynamic pressure has little effect on them and little mechanical interlocking is formed. In other words, splash fingers are likely to be weak points for the adhesion between the splat and the substrate surface. The shape of the splats therefore plays a crucial role in determining the physical properties of the coating.

Research [9] has shown that splat shapes change from the splash type to the disk-shaped type as the substrate temperature is increased or the ambient pressure is reduced.
Because the splat formation process finishes within several to several tens of microseconds, and the feedstock size is only several tens of micrometers, it is quite difficult to observe splat formation directly with the prevailing technology. The literature therefore offers several three-dimensional simulations of droplet impact and solidification under plasma spraying conditions. The solidification models described so far assume, among other things, that solidification and phase change occur under thermodynamic equilibrium, i.e., at the droplet melting temperature. The validity of this

assumption depends on the heat flux rate to the substrate.


In the literature this is known as the undercooling effect.
Research [11, 12] has shown that undercooling may result
in a significant decrease in spread ratio, defined as the
splat-to-droplet diameter ratio.

Researchers say that there are many parameters which can


potentially influence the properties of the coatings. For
economic and theoretical reasons it is not possible to
control all possible parameter variations. The most
common control parameters at plasma spray are: power
input, arc gas pressure, auxiliary gas pressure (helium,
hydrogen, nitrogen), powder gas pressure, powder feed
rate, grain size/shape, injection angle (orthogonal,
downstream, upstream), surface roughness, substrate
heating, spray distance, spray divergence, spray
atmosphere.
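For bookkeeping during parameter studies, the control parameters listed above can be grouped in a simple record; the field names and default values below are illustrative placeholders for a hypothetical study, not settings recommended by the cited authors.

# Illustrative container for the most common plasma spray control parameters.
from dataclasses import dataclass

@dataclass
class SprayParameters:
    power_input_kw: float = 40.0
    arc_gas: str = "Ar"
    auxiliary_gas: str = "H2"            # helium, hydrogen or nitrogen
    carrier_gas_flow_slm: float = 3.5
    powder_feed_rate_g_min: float = 30.0
    grain_size_um: tuple = (22, 45)
    injection_angle: str = "orthogonal"  # or downstream / upstream
    substrate_preheat_c: float = 200.0
    spray_distance_mm: float = 130.0

if __name__ == "__main__":
    # Example: vary one parameter while keeping the rest of the set fixed.
    variants = [SprayParameters(spray_distance_mm=d) for d in (130, 190, 210)]
    for v in variants:
        print(v.spray_distance_mm, "mm ->", v)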

The complexity of the plasma spray process is well illustrated by the map of influential factors on splat formation compiled by research at the University of Waikato (Picture 4). It is clear that a large number of factors are involved in splat formation.

(Picture 4 maps the influential factors, grouped as spraying conditions, feedstock properties, physical set-up, substrate conditions, thermal spray process parameters and particle parameters, all feeding into splat formation.)

Picture 4. Influential factors on splat formation


These parameters can control a variety of secondary
parameters such as quench rate, residence time of
particles in jet, gas composition of plasma jet, heat
content, deposition efficiency etc.

Increasing thermal energy also reduces the content of unpyrolyzed matter and promotes the subsequent melting of particles formed in situ. As the plasma power increases to about 46 kW [13], the molten splats attain the preferred disk-shaped feature. However, at a further increased plasma power of 49 kW, significant splashing or disintegration of splats was observed, which could be a result of the in-situ formation of hollow particles under high surface evaporation rates.

Some of these parameters were analyzed in the section on plasma-particle interactions, while the effect of the other parameters on the coating is discussed below.
Javad Mostaghimi reported [11] that splashing and break-up are primarily caused by solidification. Delaying solidification by raising the substrate temperature results in disk-shaped splats with no break-up. Another factor that must be considered is the temperature at the particle-substrate interface on impact. This contact temperature influences the adhesion of the splats as well as the adhesion of the coating to the substrate. Mostaghimi also reported that gases may be entrapped under an impacting drop, resulting in the generation of small voids under the splat. This is caused by the rapid increase in gas pressure between an impacting droplet and the substrate; the rise in pressure deforms the drop and results in gas entrapment.

The properties of the surface of the substrate also need to


be taken into account. In most industrial settings, the
pieces arriving at the spraying unit are new or covered
with old coatings. Each piece needs to be thoroughly
cleaned and then the surface roughened by abrasive grit
blasting. Thorough surface preparation ensures that a
good mechanical bond between the coating and the
substrate can be achieved.
In the case of splats deposited on a smoother surface, research [14] shows the existence of pores in the center of the splats and droplets around them. When this surface is pre-heated prior to deposition, a much more homogeneous splat is achieved, without pores or droplets. It is therefore clearly shown that lower roughness and substrate pre-heating increase wettability, allowing better mechanical anchorage of the particle to the substrate. On a rougher substrate without pre-heating, the particle disintegrates upon impact, and the surrounding droplets are an indication of weak adhesion. Preheating, on the other hand, appears to favor the wetting of the substrate by the particle upon impact, enabling superior adhesion even though droplets may still occur.

The physical status of particles at the point of impact with


the substrate is a key factor influencing the microstructure
of the coatings, as well as, its other characteristics. At
lower plasma power levels, very few molten splats with
spherical particles attached to irregular gel-like features
were observed. With increase in plasma power levels, the
fraction of molten splats increases and the spherical
particles progressively disappear, as the increasing
thermal energy promotes a reduced content of unpyrolyzed matter.

Wei-Ze Wang [15] investigated the effect of the spray distance on the mechanical properties of plasma sprayed Ni-45Cr coatings. When the spray distance was 130 mm, the mean fracture toughness was 330 J/m2. With increasing spray distance the fracture toughness tended to increase, reaching its maximum at a spray distance of 190 mm. With a further increase of the spray distance to 210 mm, the fracture toughness decreased markedly. On the other hand, the spray distance has no evident influence on the elastic modulus and Poisson's ratio of plasma sprayed Ni-45Cr coatings.


Future developments of the plasma spray process will probably include improved on-line real-time feedback control, intelligent SPC, the design of new equipment and spray powders, as well as 3D process modeling and an improved understanding of the complex nonlinear physics underlying the plasma spray process.

ACKNOWLEDGMENT
This paper is part of the project TR35034 "The research of modern non-conventional technologies application in manufacturing companies with the aim of increasing efficiency of use, product quality, reducing costs and saving energy and materials", funded by the Ministry of Education, Science and Technological Development of the Republic of Serbia.

Coating density and bond strength depend to a large


degree on particle velocities. In conventional wire arc
spraying, the velocities of the larger particles are
relatively low, limiting the bond strength of the coating.
Spraying with secondary [16] gas atomization results in
more uniform particle size distributions, more focused
spray patterns, higher particle velocities, and improved
coating properties. A primary axial air stream removes the
molten metal droplets from the wire tips and from the area
of the wire intersection, and a secondary air stream forms
a conical sheath around the axial air stream. The primary
gas stream and the secondary gas stream emerge from the
nozzle as coaxial gas streams, thus tending to protect the

References
[1] Davis, J. R., Handbook of Thermal Spray Technology, ASM International and the Thermal Spray Society, 2004
[2] Rui, J., Vilotijevi, M., Boi, D., Rai, K.,
Understanding plasma spraying process and
characteristics of DC-arc plasma gun (PJ-100),
Metall. Mater. Eng. Vol 18 (4) 2012 p. 273-282
[3] http://www.gordonengland.co.uk/ps.htm 3.03.2016
[4] Mrdak, M. R., Characteristics of aps and vps plasma
spray processes, Military technical courier, 2015.,
Vol. LXIII, No. 3
[5] Heimann, R. B., Plasma-Spray Coating: Principles
and Applications, Weinheim; New York; Basel;
Cambridge; Tokyo: VCH, 1996
[6] http://www.orao.aero/strana.php?id=22 18.03.2016
[7] Pei, W., Zhengying, W., Guangxi, Z., Jun, D., Bai Y,
The analysis of melting and refining process for inflight particles in supersonic plasma spraying,
Computational Materials Science 103 (2015) 819
[8] Vardelle, M., Vardelle, A., Fauchais, P., Li, K. I.
Dussoubs, B., Themelis, N. J., Controlling Particle
Injection in Plasma Spraying, Journal of Thermal

droplets from entrained air and to concentrate the flow


pattern of the droplets. Lower coating porosity and higher
bond strength are expected from the secondary gas injection.

The type of atomization gas has an influence on oxidation. Use of air as the atomizing gas results in increased oxidation as the particle size is reduced; using non-oxidizing gases such as nitrogen or carbon dioxide in combination with a shroud can minimize this oxidation. The higher gas velocities and the associated smaller particle sizes and higher particle velocities result in higher density coatings and improved adhesion of the coatings.

3. CONCLUSION
Until now, it has been established that the greatest advantage of the plasma spray process is the extremely wide variety of materials that are suitable for the formation of coatings (any material that melts without changing its characteristics can be used). The next advantage is that materials with a high melting point can easily form a coating without transferring a large amount of heat to, and without damaging the characteristics of, the substrate. Heat transfer from the coating to the substrate is a function of the composition of the plasma gas, the power supply, and the residence time of particles in the plasma jet. If restoration of a damaged or worn coating is needed, it can easily be done without changing the characteristics and dimensions of the already formed coating.

Spray Technology, Volume 10(2) June 2001 p. 267-284

[9] Yang, K., Liu, M., Zhou, K., Deng, C., Recent
Developments in the Research of Splat Formation
Process in Thermal Spraying , Hindawi Publishing
Corporation - Journal of Materials, Volume 2013,
Article ID 260758, 14 pages
[10] http://sciencelearn.org.nz/Contexts/Gases-and
Plasmas/Looking-Closer/Plasma-spray-coating
25.02.2016
[11] Mostaghimi, J., Chandra, S., Splat formation in
plasma spray coating process, Pure Appl. Chem.,
Vol. 74, No. 3, pp. 441445, 2002
[12] Mostaghimi, J., Pasandideh-Fard, M., Chandra, S.,
Dynamics of Splat Formation in Plasma Spray
Coating Process, Plasma Chemistry and Plasma
Processing, Vol. 22, No. 1, March 2002
[13] Govindarajan, S., Dusane, R. O., Vishwanath Joshi
S., In situ Particle Generation and Splat Formation
During Solution Precursor Plasma Spraying of
Yttria-Stabilized Zirconia Coatings, J. Am. Ceram.
Soc., 19 (2011)

It is important to understand that the plasma spraying process must be analyzed as three separate but interrelated processes; in each of the three phases, processes occur that are important for the formation of a coating. Future developments will undoubtedly be marked by a further increase in the rate of technological innovation.

[14] Paredes, R.S.C., Amico, S. C., dOliveira, A.S.C.M.,


The effect of roughness and pre-heating of the
substrate on the morphology of aluminium coatings
deposited by thermal spraying, Surface & Coatings
Technology 200 (2006) 3049 3055
[15] Wang, W. Z., Li, C. J., Wang, Y. Y., Effect of Spray
Distance on the Mechanical Properties of Plasma


Sprayed Ni-45Cr Coatings, Materials Transactions,


Vol. 47, No. 7 (2006) pp. 1643 1648
[16] Wang, X., Heberlein, J., Pfender, E., Gerberich, W.,
Effect of Nozzle Configuration, Gas Pressure, and
Gas Type on Coating Properties in Wire Arc Spray,
JTTEE5 8:565-575, 1999


THERMAL STABILITY AND MICROSTRUCTURAL CHANGES INDUCED


BY ANNEALING IN NANOCRYSTALLINE Fe72Cu1V4Si15B8 ALLOY
RADOSLAV SURLA
Faculty of Technical Sciences, University of Kragujevac, Svetog Save 66, Čačak, Serbia, ekorade@gmail.com
MILICA VASIĆ
Faculty of Physical Chemistry, University of Belgrade, Studentski trg 12-16, Belgrade, Serbia, mvasic@ffh.bg.ac.rs
NEBOJŠA MITROVIĆ
Faculty of Technical Sciences, University of Kragujevac, Svetog Save 66, Čačak, Serbia, nebojsa.mitrovic@ftn.kg.ac.rs
LJUBICA RADOVIĆ
Military Technical Institute, Ratka Resanovića 1, Belgrade, Serbia, ljubica.radovic@vti.vs.rs
LJUBICA TOTOVSKI
Military Technical Institute, Ratka Resanovića 1, Belgrade, Serbia, ljtotovski@gmail.com
DRAGICA MINIĆ
Faculty of Physical Chemistry, University of Belgrade, Studentski trg 12-16, Belgrade, Serbia, dminic@ffh.bg.ac.rs

Abstract: Fe-based nanocrystalline alloys have been attracting great scientific interest due to their numerous advantages
over the ones with completely amorphous or completely crystalline structure, and potential application in various fields,
including electronics. The functional properties of these materials are connected with their microstructure. In order to get
some information on thermally induced structural transformations of nanocrystalline Fe72Cu1V4Si15B8 alloy, DTA curve was
taken. The detected exothermic maxima suggested the occurrence of microstructural transformations at temperatures above 480 °C. XRD analysis revealed the presence of α-Fe(Si) and Fe23B6 crystals in the amorphous matrix of the as-prepared alloy, while the formation of the Fe2B phase and the transformation of the metastable Fe23B6 phase into the stable α-Fe(Si) and Fe2B phases were observed in samples annealed above 480 °C. Thermal treatment caused changes in the morphology of the alloy, including grain growth. In addition, the EDS analysis of the polished thermally treated alloy samples showed the presence of small amounts of a V-rich phase, which indicates that a certain amount of V was dissolved in the α-Fe(Si) solid solution, forming Fe2VSi. After annealing at 700 °C, separation of Cu-rich crystals was observed as well, which probably correspond to the fcc Cu phase.
Keywords: thermal stability, alloy, nanocrystalline, microstructure.
initial alloy and an appropriate thermal treatment. The first
nanocrystalline alloy with soft magnetic properties, obtained
by partial crystallization of the amorphous precursor, was
Fe73.5Cu1Nb3Si13.5B9 alloy, with commercial name
FINEMET [4]. The presence of Cu and Nb elements in this
alloy is very important for the creation of nanocrystalline
structure, since Cu atoms increase the nucleation rate, and
Nb slows down the crystal growth rate [9]. Partial
substitution of Nb by V, Mo, Ta and W affects crystal grain
size, whereby increase in atomic radius of substituting
element leads to smaller crystal grains [9, 10].

1. INTRODUCTION
Nanocrystalline alloys obtained from amorphous precursors
have been a focus of considerable scientific interest due to
their favorable functional properties and potential
application in various fields [1-8]. Within this group of
materials, very important are constructional Al-based alloys
and magnetically soft and magnetically hard Fe-based alloys
[1]. Basic characteristic of nanocrystals formed in an
amorphous matrix is their crystallite diameter, while optimal
volume fraction of nanocrystals determines desired
application. Thus, to produce an appropriate magnetically
hard nanocrystalline material, it is necessary to achieve full
or almost full crystallization [2], while on the other hand,
magnetically soft and constructional materials exhibit the
optimal magnetic and mechanical properties in partially
crystallized form, consisting of nanocrystals embedded in
amorphous matrix [1, 3].

In this paper, nanocrystalline Fe72Cu1V4Si15B8 alloy is


investigated as a part of our multidisciplinary study of Fe-based amorphous and nanostructured alloys. The research
deals with thermal stability and thermally induced
microstructural changes in the alloy, considering that
understanding of these characteristics is necessary for the
creation of materials with targeted functional properties.

Preparation of nanocrystalline alloy of desired properties


requires an adequate choice of chemical composition of the

solubility of B in bcc-Fe and the presence of Si in this phase [11], leading to a higher relative B to Fe concentration in the vicinity of the α-Fe(Si) grains. When the interphase boundaries between the amorphous phase and the crystalline α-Fe(Si) phase become sufficiently enriched in boron, favorable conditions for the crystallization of boron-containing phases occur.

2. EXPERIMENTAL PROCEDURE
Preparation of ribbon-shaped samples of nanocrystalline
Fe72Cu1V4Si15B8 alloy included rapid quenching of the melt
on a cold rotating disc (melt-spinning method). The
obtained alloy samples were 55 µm thick.
DTA measurement was carried out using a TA SDT 2960 instrument, in a stream of helium, at a constant heating rate of 20 °C min-1. XRD patterns were recorded on a Rigaku SmartLab diffractometer with Cu Kα radiation. SEM-EDS
analysis was conducted on SEM JEOL JSM-6610LV
microscope, equipped with an energy dispersive X-ray
spectrometer. XRD and SEM-EDS analysis were performed
on the as-prepared and thermally treated alloy samples,
whereby the thermal treatment included isothermal
annealing at different temperatures, for 60 min, in a nitrogen
atmosphere. The measurements were performed at room
temperature.

3. RESULTS AND DISCUSSION


The DTA scan of the nanocrystalline Fe72Cu1V4Si15B8 alloy, in the range 25-800 °C, suggests that thermally induced structural transformations occur at temperatures above 480 °C, yielding two clearly observable exothermic peaks whose maxima are located at 515 °C and 630 °C, as shown in Picture 1.

Picture 2. XRD patterns of the alloy annealed at 700 oC


for 180 min and the as-prepared alloy (inset).
Analysis of the XRD pattern of the sample annealed at 450 °C for 60 min reveals that it is still partially amorphous, while the weight fractions of the α-Fe(Si) and Fe23B6 crystalline phases are almost the same as in the as-prepared alloy. Nevertheless, annealing at this temperature leads to the formation of a small amount of another boron-containing crystalline phase, Fe2B (PDF#75-1062) (around 4 wt. %). After annealing at 550 °C for 60 min, the alloy becomes fully crystalline, while the peaks of the metastable Fe23B6 phase completely disappear from the diffractogram, because this phase is totally transformed into the stable α-Fe(Si) and Fe2B phases. Annealing at 700 °C for 180 min yields the final crystallization products, α-Fe(Si) and Fe2B, in amounts of 91.6 wt. % and 8.4 wt. %, respectively, Picture 2. These two stable crystalline phases are the most frequent final products of crystallization in Fe-Si-B amorphous and nanocrystalline alloys containing around 70-80 at. % Fe, while the weight fractions of the individual phases depend on the percentage of each element in the alloy [12, 13].

Picture 1. DTA curve of nanocrystalline Fe72Cu1V4Si15B8


alloy, measured at 20 °C min-1.

Further, it can be observed that the XRD peaks of the α-Fe(Si) phase are positioned at higher 2θ angles after annealing at the highest temperature for 180 min than after annealing at lower temperatures for 60 min, which indicates that the crystal lattice shrinks, i.e. contains a higher amount of dissolved Si, with prolonged high-temperature treatment.
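The reasoning behind "higher 2θ means a smaller lattice" can be illustrated with Bragg's law; the sketch below converts a single cubic (110) peak position to a lattice parameter, using the standard Cu Kα1 wavelength, while the two 2θ values themselves are hypothetical and not measured data from this work.

# Minimal sketch: Bragg's law gives the (hkl) spacing; for a cubic bcc lattice
# a = d*sqrt(h^2+k^2+l^2). The 2-theta values are illustrative assumptions.
import math

WAVELENGTH = 1.5406  # angstrom, Cu K-alpha1

def lattice_parameter(two_theta_deg: float, hkl=(1, 1, 0)) -> float:
    """Cubic lattice parameter (angstrom) from a single (hkl) peak position."""
    theta = math.radians(two_theta_deg / 2.0)
    d = WAVELENGTH / (2.0 * math.sin(theta))          # Bragg's law
    return d * math.sqrt(sum(i * i for i in hkl))

if __name__ == "__main__":
    for two_theta in (44.6, 44.9):  # hypothetical (110) positions before/after annealing
        print(f"2theta = {two_theta:.1f} deg -> a = {lattice_parameter(two_theta):.4f} A")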

In order to examine mechanism of thermally induced


microstructural transformations, XRD measurements were
performed on the as-prepared and isothermally annealed
alloy samples. The as-prepared alloy exhibits a halo peak at a 2θ angle of 45°, superimposed with several sharp maxima, which means that the alloy is composed of both amorphous and crystalline phases, Picture 2, inset. Further analysis shows that around 80 wt. % of the crystalline fraction belongs to the α-Fe(Si) (PDF#35-0519) phase, while the rest (around 20 %) is the metastable Fe23B6 (ICSD#54786) phase. Generally, in amorphous Fe-Si-B systems with Fe as the dominant component, crystallization starts with the formation of the α-Fe(Si) phase, which represents a solid solution of Si in bcc-Fe [8, 9]. During the formation of the α-Fe(Si) grains, the amount of Fe in the amorphous matrix decreases, while B atoms are repulsed out of the α-Fe(Si) grains due to the low

According to the results of the XRD analysis, the first DTA


peak can be attributed to crystallization, including the
formation of new crystalline grains from amorphous matrix,
the growth of the already present crystalline grains and the
transformation of Fe23B6 into α-Fe(Si) and Fe2B phases,
while the second DTA peak corresponds to recrystallization
process, during which further ordering in the formed
crystalline phases occurs, with elimination of defects and
dissolution of the rest of the Si in α-Fe(Si).


Table I. Average crystallite size of α-Fe(Si) phase.

Annealing conditions      Average crystallite size (nm)
-- (as-prepared)          39 ± 2
450 °C, 60 min            48 ± 8
550 °C, 60 min            52 ± 2
700 °C, 180 min           107 ± 5


significantly from that of the as-prepared alloy, Picture 3b. On the other hand, the surface of the alloy sample annealed at 700 °C for 60 min shows the presence of crystalline grains 0.3-5 µm in size, which differ in shape, including round grains, small spiky ones, and ones with irregular shapes, probably formed by the coalescence of neighboring grains, Picture 3c. EDS analysis reveals a high oxygen content in the crystalline grains on the alloy surface, suggesting that the surface is rich in corrosion products, which originate from the exposure of the alloy samples to air during handling. This is a consequence of the not fully amorphous as-prepared structure, since partially crystallized Fe-based alloys are more prone to corrosion than amorphous ones [16], which are considered to have relatively high corrosion resistance. Therefore, for further examination, the alloy samples were polished.


The average crystallite size of α-Fe(Si), as an important parameter influencing the functional properties of the alloy, was determined from the XRD diffractograms using the Williamson-Hall method [14]. It can be observed that the average crystallite size of the α-Fe(Si) phase, amounting to around 40-50 nm, changes very little with annealing up to 550 °C, Table I, which suggests that the faster crystallite growth occurs during the recrystallization process. The slower crystallite growth in the region of lower temperatures can result from the presence of V in the alloy, which is believed to prevent crystal growth when present in small amounts [15].
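As a rough illustration of the Williamson-Hall analysis mentioned above, the sketch below fits peak breadths against peak positions to separate size and strain broadening; the 2θ values and breadths used here are invented placeholders, not the measured data of this work, and the shape factor K = 0.9 is an assumption.

# Minimal Williamson-Hall sketch: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
# A straight-line fit gives the crystallite size D (intercept) and microstrain eps (slope).
import numpy as np

WAVELENGTH = 0.15406  # nm, Cu K-alpha1
K = 0.9               # Scherrer shape factor (assumed)

def williamson_hall(two_theta_deg, beta_rad):
    """Return (crystallite size D in nm, microstrain eps) from peak lists."""
    theta = np.radians(np.asarray(two_theta_deg)) / 2.0
    beta = np.asarray(beta_rad)
    x = 4.0 * np.sin(theta)               # abscissa: 4 sin(theta)
    y = beta * np.cos(theta)              # ordinate: beta cos(theta)
    slope, intercept = np.polyfit(x, y, 1)
    return K * WAVELENGTH / intercept, slope

if __name__ == "__main__":
    # Hypothetical bcc alpha-Fe(Si) reflections (110), (200), (211):
    two_theta = [44.7, 65.0, 82.3]          # deg
    beta = [0.0035, 0.0045, 0.0055]         # instrument-corrected breadths, rad
    D, eps = williamson_hall(two_theta, beta)
    print(f"D ~ {D:.0f} nm, microstrain ~ {eps:.4f}")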

Picture 4. SEM backscattered electron images of the


alloy annealed at 450 °C (polished sample) (a), and 700 °C (b) for 60 min.

Picture 3. SEM secondary electron images of the as-prepared alloy (a), and the alloy annealed at 300 °C (b) and 700 °C (c) for 60 min.

Picture 4a displays the SEM backscattered electron image of


the polished alloy sample annealed at 450 °C for 60 min. The shadows in the image represent hollows formed during polishing, while the appearance of certain black spots indicates that the chemical composition at these places differs from that of the surroundings. The high vanadium content in these spots (25-30 at. %), detected by EDS analysis, suggests the formation of a V-rich phase, most probably the Fe2VSi solid solution. However, only a small part of the total amount of vanadium forms this phase, while the rest is uniformly distributed in the bulk of the alloy, as shown by a vanadium content of around 4 at. % at different spots in the bulk (equal to the overall fraction of V in the alloy). No changes in the vanadium distribution were observed after annealing at higher temperatures. The Fe2VSi phase could not be observed using XRD because of its small quantity and the closeness of its peaks to those of α-Fe(Si).

In order to get more information on microstructure and


surface morphology of the as-cast and thermally treated
alloy samples, SEM-EDS analysis was performed. The as-cast alloy exhibits a relatively smooth surface, with visible dark spots, 0.5-5 µm in diameter, indicating surface corrosion, Picture 3a. After annealing at 300 °C for 60 min,
no thermally induced microstructural changes can be
observed, and surface morphology does not differ

After annealing at 700 °C for 60 min, spots up to 150 nm in size, lighter than their surroundings, appear in the SEM backscattered electron image, Picture 4b, and are found to be Cu-rich. In iron-based amorphous and nanocrystalline alloys, the role of copper is to facilitate nucleation of the α-Fe(Si) phase. Actually, Cu atoms create clusters with an fcc-like short-range structure prior to the onset of crystallization
[17]. These clusters serve as heterogeneous nucleation sites

for the α-Fe(Si) phase because of the good matching between the (1 1 1) fcc-Cu and (0 1 1) bcc-Fe planes, leading to a decreased interfacial energy when α-Fe(Si) nucleates on the cluster surface [17]. The α-Fe(Si) crystals are formed in direct contact with the Cu clusters, but do not wrap them completely. The content of Cu in the clusters is much lower than the equilibrium value for fcc Cu [17], but it can grow with further heat treatment. Accordingly, the small Cu-rich crystals observed in the alloy after annealing at 700 °C for 60 min probably correspond to the fcc Cu phase, which could not be detected using XRD due to its low weight fraction.


[5] Tavoosi,M.: Preparation of nanocrystalline Al3Ti/Al13(Fe,Ni)4 intermetallic compound by means of melt-spinning and subsequent annealing, Journal of Alloys and Compounds, 688 (2016) 808-815.
[6] Chiriac,H., Corodeanu,S., Donac,A., Dobrea,V.,
Ababei,G., Stoian,G., Lostun,M., Ovari,T.A., Lupu,N.:
Influence of cold drawing on the magnetic properties
and giant magneto-impedance response of FINEMET
nanocrystalline wires, Journal of Applied Physics, 117
(2015) 17A314.
[7] Mitrovi,N.S., Djuki,S.R., Djuri,S.B.: Crystallization
of the Fe-Cu-M-Si-B (M=Nb, V) Amorphous Alloys
by Direct-Current Joule Heating, IEEE Transactions
on Magnetics, 36 (2000) 3858-3862.
[8] Mitrovi,N.: Magnetoresistance of the Fe72Cu1V3Si16B8
amorphous alloy annealed by direct current Joule
Heating, Journal of Magnetism and Magnetic
Materials, 262 (2003) 302-307.
[9] Martienssen,W., Warlimont,H.: Handbook of
Condensed Matter and Materials Data, Springer,
Dresden, 2005.
[10] Borrego,J.M., Conde,C.F., Millan,M., Conde,A., Capitan,M.J., Joulaud,J.L.: Nanocrystallization in Fe73.5Si13.5B9Cu1Nb1X2 (X=Nb, Mo and V) alloys studied by X-ray synchrotron radiation, Nanostructured Materials, 10 (1998) 575-583.
[11] Qin,J., Gu,T., Yang,L., Bian,X.: Study on the structural
relationship between the liquid and amorphous
Fe78Si9B13 alloys by ab initio molecular dynamics
simulation, Applied Physics Letters, 90 (2007) 201909.
[12] Blagojevi,V.A., Mini,D.M., ak,T., Mini,D.M.:
Influence of thermal treatment on structure and
microhardness of Fe75Ni2Si8B13C2 amorphous alloy,
Intermetallics, 19 (2011) 1780-1785.
[13] Blagojevi,V.A., Mini,D.M., Vasi,M., Mini,D.M.:
Thermally induced structural transformations and
their effect on functional properties of
Fe89.8Ni1.5Si5.2B3C0.5 amorphous alloy, Materials
Chemistry and Physics, 142 (2013) 207-212.
[14] Williamson,G.K., Hall, W.K.: X-ray line broadening
from filed aluminium and wolfram, Acta Metallurgica,
1 (1953) 22-31.
[15] Yoshizawa,Y., Yamauchi,K.: Magnetic properties of
Fe-Cu-M-Si-B (M=Cr, V, Mo, Nb, Ta, W) alloys,
Materials Science and Engineering: A, 133 (1991) 176179.
[16] Souza,C.A.C., Oliveira,M.F., May,J.E., Botta,W.J.,
Mariano,N.A., Kuri,S.E., Kiminami,C.S.: Corrosion
resistance of amorphous and nanocrystalline Fe-M-B
(M=Zr, Nb) alloys, Journal of Non-Crystalline Solids,
273 (2000) 282-288.
[17] Hono,K., Ping,D.H., Ohnuma,M., Onodera,H.: Cu
clustering and Si partitioning in the early
crystallization stage of a Fe73.5Si13.5B9Nb3Cu1
amorphous alloy, Acta Materialia, 47 (1999) 9971006.

4. CONCLUSION
Thermally induced microstructural changes in the nanocrystalline Fe72Cu1V4Si15B8 alloy include crystallization and/or further growth of α-Fe(Si), Fe23B6 and Fe2B grains, transformation of Fe23B6 into the α-Fe(Si) and Fe2B phases, and recrystallization, during which the α-Fe(Si) becomes richer in Si. More details on the thermally induced microstructural transformations were obtained by SEM-EDS analysis, which suggested the formation of Fe2VSi and fcc Cu phases as well. In addition, it was shown that the surface of the alloy samples is prone to corrosion as a result of the partially crystallized structure of the as-prepared alloy. The average crystallite size of α-Fe(Si), important for the functional properties of the alloy, exhibited only slight changes in the temperature region 25-550 °C, while the faster crystallite growth at higher temperatures can be related to the recrystallization process.

ACKNOWLEDGMENTS
This research was supported by the Ministry of Education,
Science and Technological Development of the Republic of
Serbia, under the projects OI 172057 and OI 172015. The authors would like to thank Prof. Slavko Mentus (Faculty of Physical Chemistry, University of Belgrade) for performing the DTA measurements, and Bojana Milićević and Jelena Papan (Vinča Institute of Nuclear Sciences, University of Belgrade) for collecting the XRD data.

References
[1] Kulik,T.: Nanocrystallization of metallic glasses,
Journal of Non-Crystalline Solids, 287 (2001) 145-161.
[2] Inoue,A., Takeuchi,A., Makino,A., Masumoto,T.: Hard
magnetic properties of nanocrystalline Fe-rich Fe-NdB alloys prepared by partial crystallization of
amorphous phase, Materials Transactions JIM, 36
(1995) 962-971.
[3] Kim,Y.H., Hiraga,K., Inoue,A., Masumoto,T., Jo,H.H.:
Crystallization and high mechanical strength of Albased amorphous alloy, Materials Transactions JIM,
35 (1994) 293-302.
[4] Yoshizawa,Y., Oguma,S., Yamauchi,K.: New Fe-based
soft magnetic alloys composed of ultrafine grain
structure, Journal of Applied Physics, 64 (1988) 60446046.


LOW LEVEL TRITIUM DETERMINATION IN ENVIRONMENTAL


SAMPLES USING 1220 QUANTULUS
NEVENA ZDJELAREVIĆ
PC Nuclear Facilities of Serbia, Belgrade, nevena.zdjelarevic@nuklearniobjekti.rs
MARIJA LEKIĆ
PC Nuclear Facilities of Serbia, Belgrade, marija.lekic@nuklearniobjekti.rs
NATAŠA LAZAREVIĆ
PC Nuclear Facilities of Serbia, Belgrade, natasa.lazarevic@nuklearniobjekti.rs

Abstract: The main aim of this paper was to determine tritium concentrations in environmental samples. The method used was liquid scintillation counting. Surface water samples were taken from the Mlaka creek and the Danube River at different locations near PC Nuclear Facilities of Serbia. Precipitation was collected during July 2016 at the PC Nuclear Facilities of Serbia site. Tritium concentrations ranged from 2.4 to 14.8 Bq/l for surface water samples and amounted to 5.9 Bq/l for precipitation, which is consistent with annual tritium concentrations.
Keywords: tritium, scintillation counting, environmental samples.
significant amounts of short-term tropospheric tritium
fallout were detected in the SW Pacific area after 1968, but
long-term stratospheric fallout has shown a marginal
increase in 1969-70 superimposed on the very slow
decrease observed since 1965 [5].

1. INTRODUCTION
Tritium, a radioactive (unstable) isotope of hydrogen with a half-life of 12.32 ± 0.02 years (4500 ± 8 days), decays to 3He emitting a beta particle with a maximum energy of 18.6 keV [1]. Tritium is a naturally occurring radionuclide, produced mainly by interactions between cosmic-ray neutrons and nitrogen in the upper atmosphere [2], via the reaction 14N(n,T)12C. The largest anthropogenic source was atmospheric nuclear testing between 1952 and 1969, which disturbed the natural levels of tritium. It is estimated that a total of 1.67 × 10^20 Bq of 3H was injected via
atmospheric nuclear weapons testing [3]. The artificial
production of tritium has increased with the increase in
nuclear weapons testing involving high-yield thermonuclear
reactions. This, together with the fact that tritium is
produced in nuclear power plants, which is a significant source
of tritium in the environment, has contributed considerably
to the accumulation of tritium in the atmosphere. Since the
beginning of thermonuclear weapon tests, a large quantity of
"artificial" tritium has been released into the atmosphere,
and together with contribution from the neutron irradiation
of nitrogen, has resulted in the rise of tritium content of rain
water from a level of between 0.5 and 5.0 tritium atoms per 10^18 hydrogen atoms to as high as 500 tritium atoms per 10^18 hydrogen atoms [4]. Once released into the
atmosphere, tritium can be incorporated into plants or
deposited to soil and incorporated into soil moisture,
because tritium has the same chemical behaviour as
hydrogen and migrates with water. After hydrogen bombs
(25 August, 2MT; 9 September, 1MT) were detonated for
the first time in the Southern Hemisphere during the 1968
French test series at islands in the Tuamoto Archipelago, no

According to the European Commission, the upper limit for


tritium in water is 100 Bq/l [6]. This value is not based on
health effects relative to its consumption but more as a
monitoring value. A tritium activity of 100 Bq/l could
indicate that leakage or a release occured on a power plant
or that there has been some illegal nuclear testing, so further
analysis have to be taken to check if other radio- nuclides
are present in water.
For this reason, there is a need to determine tritium concentrations in various environmental samples around the PC Nuclear Facilities of Serbia (PC NFS), to estimate the dispersion trend of tritium and to obtain baseline data on the present level of tritium, which would be valuable for estimating the environmental impact of the operation of nuclear facilities or of possible malicious nuclear weapon testing. The spatial and temporal distributions of tritium in precipitation, river water, atmospheric water vapor, and air have been observed in the Belgrade area since 1976 [7]. The annual mean release of tritium into the environment from heavy water reactor operations at the Vinča Institute, based on precipitation measurements, was estimated to be about 80 TBq.
In this paper, we used liquid scintillation counting for
determination of tritium concentrations in samples from
different locations along the course of Mlaka creek and
Danube River, and in monthly precepitation collected at PC
NFS
site.


2. EXPERIMENTAL PROCEDURE

2.1. Equipment

The Laboratory for environmental monitoring in PC NFS is equipped with a low-background liquid scintillation detector system, Quantulus 1220 by Perkin Elmer, Finland. The 1220 Quantulus is a dedicated environmental spectrometry system for the measurement of extremely low concentrations of beta and alpha radiation. A unique anticoincidence guard detector and a lead radiation shield provide active and passive shielding from environmental radiation, which stops most of the gamma and all cosmic radiation. The passive shield is asymmetric and is made of low-radioactivity lead that surrounds the detector assembly. Also, the head of the piston is made of copper and is a part of the passive shielding. The active shielding, which is necessary to remove natural background fluctuations, is provided by an asymmetric guard detector filled with a mineral oil scintillator. The system is provided with two pulse analysis circuits: pulse shape analysis (PSA) and pulse amplitude comparator (PAC). PSA allows simultaneous acquisition and sensitive discrimination of pure alpha and beta spectra from mixed radiation in a sample. PAC is used to decrease the component of background due to optical crosstalk in liquid scintillation counting. The 1220 Quantulus has two multichannel analyzers (MCAs), one for the active shield and one for recording of spectra. In order to determine the counting efficiency, i.e. the absolute activity of the sample, it is necessary to measure the level of quench of the sample first. The quench calibration curve is made by using the external standard (a built-in gamma source, 152Eu) [8].
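The use of a quench calibration curve can be illustrated with a small interpolation sketch: the counting efficiency is tabulated against the quench indicator SQP(E) for a set of standards and then looked up for a sample; the calibration points below are invented placeholders, not the laboratory's actual curve.

# Minimal sketch of applying a quench calibration curve: efficiency vs. SQP(E).
# Calibration points are invented placeholders for illustration only.
import numpy as np

sqp_cal = np.array([700.0, 740.0, 780.0, 820.0])   # quench indicator of standards
eff_cal = np.array([0.28, 0.32, 0.36, 0.39])       # measured counting efficiencies

def efficiency_from_sqp(sqp_sample: float) -> float:
    """Interpolate the counting efficiency for a sample from its SQP(E) value."""
    return float(np.interp(sqp_sample, sqp_cal, eff_cal))

if __name__ == "__main__":
    print(f"SQP(E) = 779 -> efficiency ~ {efficiency_from_sqp(779.0):.2f}")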

Picture 1. Sampling locations


For the measurements we used standard 20 ml polyethylene vials.
The counting efficiency (ε) was calculated as:

\varepsilon = \frac{R_{\mathrm{DWTS}} - R_b}{A_{\mathrm{DWTS}}} \qquad (1)

where R_DWTS is the distilled water tritium standard count rate (s^-1), R_b is the background count rate (s^-1) and A_DWTS is the activity of the distilled water tritium standard in becquerels (Bq).
The recovery correction factor, F, is obtained using Eq. (2) given below:

2.2. Sample Treatment and Measurement


Water samples were taken from three different locations along the Mlaka creek and two locations along the Danube River, and monthly precipitation was collected at the PC NFS site. The Mlaka creek flows along the nuclear facilities boundary and into the Bolečica stream, which in turn flows into the Danube River. The first sampling point is at the Mlaka creek before any influence of the nuclear facilities and the other two are on the PC NFS site. The fourth and fifth sampling points are along the Danube River, 1 km from the confluence of the Bolečica stream with the Danube, upstream and downstream, respectively (Picture 1). Precipitation was collected in rain gauges at the local station in PC NFS during July 2016. The first three sampling points were chosen because of the influence of the PC NFS, and the sampling points on the Danube River were also chosen because of the influence of the neighboring power plants (i.e. PWR Paks, Hungary).
All samples were prepared according to the ASTM D4107-08 method [9]. Each water sample was filtered through a slow depth Whatman filter, distilled, and then mixed with the scintillation cocktail UltimaGold LLT (8 ml of sample with 12 ml of cocktail). In this test method samples are treated with sodium hydroxide and potassium permanganate. The alkaline treatment is used to prevent other radionuclides from distilling over with tritium, and the permanganate treatment is used to oxidize organics in the samples, which could otherwise distill over and cause quenching. As a background sample, distilled deep well water was used. A tritium standard solution containing 15.9 Bq/ml was prepared using low-level tritium water (deep well water) and a commercial 3H Water Internal Standard from Perkin Elmer; the preparation of the background and the standard was the same as the preparation of the samples.

F = \frac{R_{\mathrm{DRWTS}} - R_b}{\varepsilon \, A_{\mathrm{RWTS}}} \qquad (2)

where R_DRWTS is the count rate of the distilled raw water standard (s^-1), and A_RWTS is the activity of the undistilled raw water tritium standard (Bq).
Then, the tritium activity concentration, A, of the sample was calculated as:

A = \frac{R_a - R_b}{\varepsilon \, F \, V \, e^{-\lambda t}} \qquad (3)

where R_a is the sample gross count rate (s^-1), R_b is the background count rate (s^-1), ε is the detection efficiency (determined in Eq. (1)), F is the recovery factor (determined in Eq. (2)), V is the volume of the sample (l), λ = ln 2 / t_1/2 is the decay constant of tritium (days^-1), t_1/2 is the half-life of tritium (days) and t is the elapsed time between sampling and counting (days).
The minimum detectable activity concentration was calculated as follows:

\mathrm{MDA} = \frac{2.71 + 3.29\sqrt{R_b \, t_a \left(1 + t_a/t_b\right)}}{t_a \, \varepsilon \, F \, V \, e^{-\lambda t}} \qquad (4)

where t_a is the counting time of the sample (s) and t_b is the counting time of the background (s).
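A compact numerical sketch of Eqs. (1)-(4), as reconstructed above, is given below; apart from the tritium half-life and the 300 min counting time mentioned in the text, all count rates, activities, volumes and delays are illustrative placeholders, not measured values from this work.

# Minimal sketch of the activity and MDA evaluation from Eqs. (1)-(4).
import math

T_HALF_DAYS = 12.32 * 365.25          # tritium half-life, days
LAMBDA = math.log(2) / T_HALF_DAYS    # decay constant, 1/days

def efficiency(r_dwts, r_b, a_dwts):
    """Eq. (1): counting efficiency from the distilled-water tritium standard."""
    return (r_dwts - r_b) / a_dwts

def recovery(r_drwts, r_b, a_rwts, eff):
    """Eq. (2): recovery correction factor from the distilled raw-water standard."""
    return (r_drwts - r_b) / (eff * a_rwts)

def activity(r_a, r_b, eff, f, volume_l, t_days):
    """Eq. (3): tritium activity concentration (Bq/l), decay-corrected to sampling."""
    return (r_a - r_b) / (eff * f * volume_l * math.exp(-LAMBDA * t_days))

def mda(r_b, t_a_s, t_b_s, eff, f, volume_l, t_days):
    """Eq. (4): minimum detectable activity concentration (Bq/l)."""
    l_d = 2.71 + 3.29 * math.sqrt(r_b * t_a_s * (1.0 + t_a_s / t_b_s))
    return l_d / (t_a_s * eff * f * volume_l * math.exp(-LAMBDA * t_days))

if __name__ == "__main__":
    eff = efficiency(r_dwts=5.0, r_b=0.02, a_dwts=14.0)       # placeholder rates (s^-1), Bq
    f = recovery(r_drwts=4.6, r_b=0.02, a_rwts=14.0, eff=eff)
    print(f"efficiency ~ {eff:.2f}, recovery ~ {f:.2f}")
    print("A   ~", round(activity(0.06, 0.02, eff, f, 0.008, 10), 1), "Bq/l")
    print("MDA ~", round(mda(0.02, 300*60, 300*60, eff, f, 0.008, 10), 1), "Bq/l")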

3. RESULTS AND DISCUSSION


In this paper we measured tritium concentrations in five different water samples and one precipitation sample, all sampled in the same month (July 2016). All samples were distilled in three series and the average value of the activity was calculated. After the distillation, the samples were mixed with the scintillation cocktail and dark-adapted for 9 hours before the measurement.

The summer maximum from May to July corresponds to the effect of intense mixing across the troposphere during spring time. Our results show good compliance with this statement. The result for the tritium concentration in precipitation at PC NFS is slightly higher than in precipitation at Zeleno Brdo (2.63 ± 0.24 Bq/l) reported in [10], which can be attributed to the influence of the heavy water nuclear reactor RA located at PC NFS.

The optimal counting window for tritium was determined to be between channels 1 and 260.
Picture 2 shows the dependence of the MDA on the total counting time (when the counting time of the sample and of the background is the same). In this paper we used a total counting time of 300 minutes for samples and background and obtained an MDA value of 2.4 Bq/l. The determined detection efficiency was (36 ± 1) %.

5. CONCLUSION
In this paper we presented the obtained results for tritium
activity measurements in water samples from Mlaka creek,
Danube River and in monthly precipitation collected at PC
NFS site. Tritium concentrations ranged from 2.4 to 14.8
Bq/l for surface water samples and 5.9 Bq/l for precipitation
in PC NFS, which is consistent with annual tritium
concentrations.

Picture 2. MDA values for different counting times (MDA in Bq/l versus total counting time t, in minutes)

Although the stated concentrations are far below the upper limit for tritium in water, it is essential to continuously monitor the tritium level in surface waters and precipitation in order to detect a possible increase in the environmental tritium concentration due to the impact of nuclear facilities or malicious nuclear weapon testing.


References


[1] Lucas,L.L., Unterweger,M.P.: Comprehensive review


and critical evaluation of the half-life of tritium, J. Res.
Natl. Inst. Stand. Technol., 105 (2000), 541549.
[2] Madruga,M.J., Sequeria,M.M., Gomes,A.R.: 2008.
Determination of Tritium in Waters by Liquid
Scintillation Counting, LSC 2008, Advances in Liquid
Scintillation Spectrometry. Eikenberg, J., Jaggi, M.,
Beer, H., Baehrle, H. (Eds.), 353359.
[3] UNSCEAR, Sources and Effects of Ionizing Radiation,
United Nations Publications, Vienna, Austria, 1977.
[4] Von Butler,H., Libby,W.F.J. Inorg. Nucl. Chem. 1955,
1,118.
[5] Taylor,C.B.: Influence of 1968 French thermonuclear
tests on tritium fallout in the Southern Hemisphere,
Earth and Planetary Science Letters, 10, Issue 2,
(1971), 196-198.
[6] European Commission, 1998. European Drinking
Water Directive 98/83/EC of 3 November 1998 on the
Quality of Water Intended for Human Consumption.
Official Journal Legislation 330.
[7] Miljevic,N., Sipka,V., Zunjic,A., Golobocanin,D.: Tritium
around the Vinca Institute of Nuclear Sciences, Journal of
Environmental Radioactivity 48 (2000) 303-315.
[8] Wallac 1220 Quantulus, Instrument Manual, Perkin
Elmer, 2005
[9] ASTM International, Standard Test Method for Tritium
in Drinking Water, D4107-08
[10] Nikolov,J., Todorovi,N., Jankovic,M., Vostinar,M.,
Bikit,I.: Different Method for Tritium Determination in
Surface Water by LSC, Applied Radiation and Isotopes
71 (2013) 51-56.
[11] Varlam,C., Stefanescu,I., Vagner,I., Fcaurescu,I.:
Tritium Level Along Romanian Danube River Sector,
Proceedings of Third European IRPA Congress (2010).

The SQP (E) value for each of the measured samples,


including background and standards, varied around 779 with
a variation coefficient of not more than 1%.
The results obtained for 5 water samples taken from Mlaka
creek and Danube River are shown in Table 1 and the
results for tritium concentration in monthly precipitation
collected at PC NFS site are shown in Table 2.
Table 1. Tritium concentrations in water samples

No.  Sample name             3H activity concentration [Bq/l]
1.   Mlaka creek point 1     < MDA
2.   Mlaka creek point 2     12.2 ± 0.8
3.   Mlaka creek point 3     14.8 ± 0.8
4.   Danube River point 1    11.8 ± 0.8
5.   Danube River point 2    12.2 ± 0.8
Tritium concentrations in the Mlaka creek and the Danube River ranged from below the MDA to 14.8 Bq/l. The tritium concentration at the first sampling point at the Mlaka creek (which is before the influence of the nuclear facilities) was below the MDA, while the other four water samples have slightly higher 3H concentrations. All results obtained for these surface waters are in accordance with the annual tritium concentrations referred to in [10] and [11].
Table 2. Tritium concentration in precipitation collected at the PC NFS site

No.  Sample name                3H activity concentration [Bq/l]
1.   precipitation, July 2016   5.9 ± 0.7

The concentration of tritium in precipitation collected during July 2016 at the PC NFS site was 5.9 ± 0.7 Bq/l. In reference [7] it is stated that the annual mean tritium concentrations in precipitation range from 2.2 to 35.4 Bq/l, with a distinct summer maximum from May to July.

OPTIMIZATION AND VIRTUAL QUALITY CONTROL OF A CASTING


SREĆKO MANASIJEVIĆ
Lola Institute, Belgrade, srecko.manasijevic@li.rs
RADOMIR RADIŠA
Lola Institute, Belgrade, radomir.radisa@li.rs
JANEZ PRISTRAVEC
Exoterm-it, Kranj, Slovenia, www.exoterm.si
VELIMIR KOMADINIĆ
Lola Institute, Belgrade, velimir.komadinic@li.rs
ZORAN RADOSAVLJEVIĆ
Lola Institute, Belgrade, zoran.radosavljevic@li.rs

Abstract: The paper shows the application of modern information technology to the virtual optimization and the quality
control of castings in several samples. The MAGMA5 software package was used to optimize the relevant technological
casting parameters and for analysing the quality of castings. It has been shown from the obtained results that potential
problems can be easily identified and eliminated at the design stage of castings, which enables the product and tool designer
and engineer to optimize all relevant parameters of the casting process.
Keywords: MAGMA5, CAE, casting process simulation, optimization, quality control.
The use of contemporary product design techniques (CAD/CAM/CAE) and contemporary techniques for computer simulation of the casting process shortens the time for casting development and reduces its cost. The current broad knowledge of the process makes it possible to foresee the impact of the relevant technological parameters on the casting and solidification of the casts. This has resulted in modeling of the solidification process and in the development of programs for the computer simulation of casting, which simulate the events of the actual process and display the results graphically. Computer simulation of the casting process is a description of the actual state using a logical mathematical-physical model. All physical laws under which this process unfolds, together with the boundary conditions, are entered into a mathematical model processed by a computer.

1. INTRODUCTION
The foundry industry, as a vital branch of industry, is currently facing many challenges. On one side, founders have to satisfy the rising expectations of the buyers in terms of assuring quality, shorter time for product realization, smaller series, and lower, more competitive prices. On the other hand, the foundries are losing touch with the fast technological and managerial changes in the manufacturing sectors. This interplay of the founder's skill and the hardly predictable nature of the cast metal is a walk on the fine line separating success from failure, a good cast from one that needs to be returned to the furnace [1].
Increased competition and survival in the world market are reflected in the speed of technical innovation transfer. This is the reason why it is necessary to launch new quality products, produced in the shortest possible time and with smaller costs. The shortening of the time for replacing products on the market is accompanied by increased demands for functionality, design, economy, etc. This means that the following requirements are set before the industry: shortening development time, reducing manufacturing expenditures, improving quality, increasing safety, improving design and functionality, and meeting ecological standards.

Simulation of the casting process has been used in foundries for 30 years. Over time, the software packages have become highly developed and now encompass every segment of casting. Nowadays, there are several software packages for the simulation of casting and solidification (MAGMA5, ProCast, Access, etc.). They read graphic formats such as .stl and .step, so 3D geometry models are created using CAD software packages and exported into these formats [2-4].
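As a small illustration of what reading an exported geometry involves, the sketch below parses an ASCII .stl file and reports its bounding box, which is roughly the information a meshing step needs to size its grid; this is a generic parser written for illustration, not the import routine of MAGMA5 or any CAD package, and "part.stl" is a hypothetical file name.

# Minimal ASCII STL parser: collects vertices and prints the bounding box.
def stl_bounding_box(path: str):
    """Return ((xmin, ymin, zmin), (xmax, ymax, zmax)) of an ASCII STL file."""
    mins = [float("inf")] * 3
    maxs = [float("-inf")] * 3
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "vertex":
                for i, value in enumerate(map(float, parts[1:4])):
                    mins[i] = min(mins[i], value)
                    maxs[i] = max(maxs[i], value)
    return tuple(mins), tuple(maxs)

if __name__ == "__main__":
    lo, hi = stl_bounding_box("part.stl")   # hypothetical exported geometry
    print("bounding box:", lo, "to", hi)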

Today, NC machining centres are primarily used, and they require dimensionally stable casts with uniform surface hardness in order to prevent damage to the cutting tools. Comparing the current situation with that of ten years ago, it is noticeable that most countries have reduced the number of foundries (not only in our country, but in the wider region), which is compensated by an adequate increase in manufacturing volume.

2. MATERIALS AND METHODS

The goal of this paper is to use specific cases to show the advantages of using software packages for the virtual quality analysis of casts based on CAE techniques.

MAGMA GmbH is research & development center in


Germany that has been known to all foundries in the world
for some time now with its software package MAGMA5.
685


This research center has developed a comprehensive program package of the new generation designed for simulating foundry processes. For simulation execution it is necessary to provide a 3D geometric model of the cast and the other components (casting tools, gating system, feeder) and the technological parameters (casting temperature, casting time, alloy composition, etc.). MAGMA5 reads graphic formats such as .stl and .step. This means that previously constructed 3D models (made with some CAD software, Pictures 1a, 1c and 1e) can be used and imported into the MAGMA5 graphic station as originals (Pictures 1b, 1d and 1f). There is also an option of constructing simple 3D geometry models in MAGMA5 itself.

elements can be adjusted by the user in all three directions of the coordinate system by defining the desired minimal element size. The greater the refinement, i.e. the more elements, the more precise the simulation calculation, but the longer the simulation time. For more complex and/or thin-walled casts it is necessary to define a more refined mesh.
The prepared mesh is used for the further calculation. For each element, i.e. part of the mesh, differential equations are used for calculating the physical-thermal parameters, and the obtained results are the boundary conditions for calculating the parameters in the adjacent elements. In this way, the computer performs the calculation between the elements in 3D coordinates and, in the end, integrates all partial results over the overall geometry.

In the next phase, all geometrical assembles are divided into


partial elements. Software itself automatically generates the
network. Network refinement, i.e. number of network
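The element-by-element calculation described above can be illustrated with a minimal explicit finite-difference sketch. This is not MAGMA5's solver; the diffusivity, geometry, boundary handling and time stepping below are simplified assumptions chosen only to show how mesh refinement trades accuracy against computation time.

import numpy as np

def heat_step(T, alpha, dx, dt):
    """One explicit finite-difference step of 3D heat conduction on a uniform mesh.
    Periodic boundaries are used only to keep the sketch short."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) +
           np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T) / dx**2
    return T + alpha * dt * lap

alpha = 9.0e-5                        # assumed thermal diffusivity of the melt [m^2/s]
for dx in (4e-3, 2e-3):               # element sizes of 4 mm and 2 mm
    n = int(0.2 / dx)                 # a 200 mm cube of melt
    T = np.full((n, n, n), 700.0)     # initial melt temperature [degC]
    dt = dx**2 / (6 * alpha)          # largest stable explicit time step
    for _ in range(20):
        T[0, :, :] = 200.0            # one face held at the die temperature
        T = heat_step(T, alpha, dx, dt)
    # Halving dx gives 8x more elements and a 4x smaller stable time step.
    print(f"dx = {dx*1e3:.0f} mm: {n**3} elements, dt = {dt:.3f} s")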

Picture 1. a) The Pelton turbine bucket - 3D model, b) the Pelton turbine bucket in MAGMA5, c) a LED street light housing - 3D model, d) the LED street light housing in MAGMA5, e) an excavator tooth holder - 3D model, f) the excavator tooth holder in MAGMA5 [6-8]


A more detailed analysis of the porous spots in the cast can be made using the Porosity criteria, which show the visible porosity and shrinkage cavities (lunkers), phase structures, size and strain distribution, etc.

3. RESULTS AND DISCUSSION


3.1. Results of the simulation of the piston casting
The casting and solidification simulation of the piston cast is presented in this paper through the most common results. The postprocessor yielded results which, in addition to numerical values, can also be presented via scales (for example, a thermal scale), X-ray views, vectors, points, etc. Picture 2 shows the vectors of the casting speed at the transition point between the gating system and the piston cast.
Using the "Solidification Results" criteria gave the solidification results (with isotherms, temperature fields, an overview of liquid, pasty or solid metal, temperature gradients, etc.). Two different solidification times were analyzed (110 s and 140 s). Picture 3 shows the solidification results for the two time intervals, i.e. the percentages of solidified metal.
The solidification simulation is shown as an X-ray view with a thermal scale (the results are given in [°C]). Based on this overview of the solidification simulation results, potentially critical locations in the cast can be identified.

Picture 2. Vectors of casting speed with a scale [6]
Picture 3. Simulation of piston solidification: a) 110 s and b) 140 s [6]
By using the MAGMAhpdc module, the casting process simulation shows turbulent melt flow, resulting in the formation of gaseous inclusions (Picture 5). The FillTracer (Picture 5a) and FillVelocity (Picture 5b) criteria confirm the tendency towards these defects in the indicated area.

3.2. Results of the simulation of the LED street light housing
The second analyzed case is the LED street light housing. The FillTemp criterion, for example, shows the distribution of the metal temperature during mould filling. Using this criterion leads to the first conclusion that the design of the gating system is not good.

Examples of the simulation results for the Air Back Pressure and Air Entrapment criteria are presented in Picture 5c. Due to the priority filling along the edges, the flow fronts meet in the middle of the cast. The positions of the trapped gas correspond to the positions of the defects in the real cast.

Due to the long period of piston acceleration to the final filling speed, the initial part of the liquid metal stream hits the gate wall (Picture 4a) and starts to solidify before it reaches the main mass. This results in subcooling of the gate (Picture 4b).


Picture 4. a) The FillTemp criterion and b) a subcooling in the gate [7]

Picture 5. A critical zone analysis for the criteria: a) FillTracer, b) FillVelocity and c) Air Entrapment, with the corresponding defects in the cast [7]
The highest air suction is below the beginning of the die casting ribs (Picture 6). Due to the complex shape of the die-cast part and the poor design of the gating system, the liquid metal at the beginning moves towards the upper edge of the ribs, creating underpressure and an underfilled area of the cast (the highlighted area in Picture 6a).

Using the MAGMA5 software package leads to the conclusion that the casting machine on which the test casting is carried out is actually underpowered for this cast shape and gating system design. If a safety factor of 25% is adopted for the mould closing force of the machine, a maximum pressure of 295 bar can be achieved in phase III (Picture 7b). Therefore, the option of increasing the piston diameter to 80 mm was analyzed. The speed of the first phase was kept constant for both piston diameters: 0.2 m/s (Picture 7c).

Picture 6. Analysis of the critical areas for the criteria: a) Air Entrapment and b) Age [7]

Picture 7. The analysis of the casting machine: a) the input data, b) the calculation and c) the comparative overview of the two piston diameters [7]
The piston with a diameter of 70 mm and a maximum acceleration of 266 m/s² reaches the phase II speed at the position of 250 mm. The acceleration starts when the chamber is filled up to the top, which corresponds to 193.35 mm. The piston speed in phase II is 5.5 m/s, while the theoretical speed of the molten metal at the ingate is 41.6 m/s. If a piston with a diameter of 80 mm is used, the acceleration starts when the chamber is filled up to the top, which corresponds to 252.34 mm. The piston speed in phase II is then 4.2 m/s, and the theoretical speed of the molten metal at the ingate is 41.48 m/s. This leads to the conclusion that the use of the 80 mm piston reduces the time needed to reach phase II.
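The relation between the two reported ingate speeds follows from flow continuity: the melt volume flow delivered by the piston must pass through the ingate, so v_gate = v_piston * A_piston / A_gate. The short sketch below reproduces the reported values; the ingate cross-section of about 509 mm² is back-calculated from those values and is an assumption, not a figure taken from the study.

import math

def ingate_speed(piston_diameter_mm, piston_speed_m_s, gate_area_mm2):
    """Theoretical melt speed at the ingate from flow continuity:
    v_gate = v_piston * A_piston / A_gate."""
    piston_area = math.pi * (piston_diameter_mm / 2) ** 2
    return piston_speed_m_s * piston_area / gate_area_mm2

GATE_AREA = 509.0  # mm^2, back-calculated assumption, not given in the paper

print(ingate_speed(70, 5.5, GATE_AREA))  # ~41.6 m/s, as reported for the 70 mm piston
print(ingate_speed(80, 4.2, GATE_AREA))  # ~41.5 m/s, as reported for the 80 mm piston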

3.3. Results of the simulation of the excavator tooth holder
The third analyzed example is the excavator tooth holder. With the assistance of the FillTemp criterion it was immediately evident that the design of the gating system and feeder was not good (Version_V02, Picture 8a).
Adjustment of the gating system and feeder led to a better temperature distribution during pouring (Version_V07, Picture 8b): instead of one feeder, two feeders with insulation sleeves were used, and the dimensions of the gating system were changed.


Picture 8. The criterion FillTemp: a) V02 and b) V07 [8]


By using the Solidification Criteria Results after the calculation, the program automatically computes several criteria that can be used at any point. The results of these criteria help in analyzing the solidification process and in finding cast deficiencies. Picture 9a shows the presence of porosity according to the TotalPorosity criterion for Version V02, as confirmed in practice. Picture 9b shows the results of the Porosity criterion for Version V07, in which no porosity is visible in the cast. This type of gating system proved in practice to yield a sound cast.

Picture 9. An analysis of the critical areas with the Porosity criterion: a) V02 and b) V07 [8]

5. CONCLUSION

The use of information technologies and virtual casting technology, in this case the software package MAGMA5, enabled the testing of different versions in order to optimize the technological casting parameters. Before experimental casting, each version is tested in the virtual world of the computer, which makes all potential problems visible. The designer can cut the digital cast in any plane and analyze any part in order to find discrepancies (presence of porosity, segregation, etc.); residual stresses, the metallurgical structure of the cast and many other features can also be analyzed. Only after the right solution is found are the results implemented in reality. This method of developing new products avoids expensive and lengthy experimental research, and its advantage is the ease with which it can be applied in practice for verifying technologies in foundries.

ACKNOWLEDGEMENTS
This paper is a part of the results of experimental research within a project financed by the Ministry of Education, Science and Technological Development of the Republic of Serbia.

References
[1] Manasijević, S., Radiša, R.: Skill and intuition of foundrymen or computer assisted casting, LIVARSTVO, 47 (1) (2008) 10-11.
[2] Radiša, R., Gulišija, Z., Manasijević, S.: Process optimization of metal casting, MACHINE DESIGN, 1 (2009) 111-114.
[3] Manasijević, S., Radiša, R., Aćimović-Pavlović, Z., Raić, K., Marković, S.: Software packages for simulation and visualization of the process of casting pistons, LIVARSTVO, 48 (1) (2009) 14-20.
[4] Radiša, R., Manasijević, S., Marković, S.: Optimization of metal casting technology using software tools, IX International Conference on Accomplishments in Electrical and Mechanical Engineering and Information Technology DEMI 2009, 28-29 May, Banja Luka, pp. 175-180, 2009.
[5] http://www.magmasoft.de/
[6] Manasijević, S., Radiša, R., Marković, S., Aćimović-Pavlović, Z., Raić, K.: Optimization of technological parameters of piston casting using information technology, 12th International Foundrymen Conference, Sustainable Development in Foundry Materials and Technologies, 24-25 May, Opatija, Croatia, pp. 236-246, 2012.
[7] Radiša, R., Manasijević, S., Pristavec, J., Mandić, V.: Using MAGMA5 to optimize relevant technological parameters of the process of casting the housing of a LED lamp for street lighting, VI International Congress of Metallurgists of Macedonia, 29 May-01 June, Ohrid, Macedonia, pp. 1-5, 2014.
[8] Radiša, R., Manasijević, S., Pristavec, J., Mandić, V., Komadinić, V.: Using MAGMA5 to optimize the parameters of casting an excavator tooth holder, Metallurgical & Materials Engineering Congress of South-East Europe (MME SEE 2015), 03-05 June, Belgrade, Serbia, pp. 321-326, 2015.

SECTION VIII

Quality, Standardization, Metrology,


Maintenance and Exploitation

CHAIRMAN
Professor Biljana Marković, PhD

RELIABILITY PREDICTION OF ELECTRONIC EQUIPMENT:


PROBLEMS AND EXPERIENCE
SLAVKO POKORNI
Information Technology School, Belgrade, slavko.pokorni@its.edu.rs

Abstract: The problems and experiences from about thirty years of reliability prediction of electronic equipment using MIL-HDBK-217 and of teaching reliability and maintainability theory in higher education are discussed in this work. Because MIL-HDBK-217 has limitations and has not been updated since 1995, RIAC introduced a new methodology, based on PRISM, and the new MIL-HDBK-217Plus; however, applying this methodology can be more time-consuming and more costly, and MIL-HDBK-217Plus is no longer free, so for small companies it is not cost-effective. But are predictions made with this new methodology always more accurate, and what if there is not enough data to apply it? These problems are also discussed in this work. Software and human reliability, the reliability of cloud services, the reliability of the Internet of Things and reliability culture are also mentioned.
Keywords: reliability, reliability prediction, hardware, software, human, cloud services, internet of things, culture.

1. INTRODUCTION

Reliability as a theory and practice began to develop in the 1950s. In the mid-1960s the military handbook for reliability calculation, MIL-HDBK-217, appeared. Even though commercial handbooks appeared later, MIL-HDBK-217 is still used by more than 80% of engineers calculating reliability. Over time it has been shown that this handbook, which is essentially based on an exponential distribution of failures, has a number of limitations and that other approaches are needed [1].

My experience has shown that the problem also lies in establishing good reliability requirements and in the inadequate cooperation between the engineers who design the equipment and the engineers who calculate the equipment reliability, especially in the inadequate estimation of the input data for the reliability calculation [1].

Because software and human reliability are connected to hardware reliability in today's systems, we will also discuss these relations.

We will also discuss the reliability of cloud services and of the Internet of Things, and the so-called reliability culture.

Because MIL-HDBK-217 has limitations and has not been updated since 1995, the Reliability Information Analysis Center (RIAC) introduced a new methodology, based on PRISM (a software tool of the Reliability Analysis Center (RAC)) [2], and the new MIL-HDBK-217Plus [2, 3]; however, applying this methodology can be more time-consuming and more costly, and MIL-HDBK-217Plus is no longer free, so for small companies it is not cost-effective. We will discuss whether predictions made with this new methodology are always more accurate, and what happens if there is not enough data to apply it.

New methodologies require new knowledge, so education for reliability is also important to discuss.

2. RELIABILITY PREDICTION

Reliability prediction using MIL-HDBK-217 has been carried out for almost 60 years, but about 30 years ago it was identified that new approaches are needed [1]. In this chapter we briefly discuss the limitations of MIL-HDBK-217 and these new approaches, such as RIAC's 217PlusTM.

2.1. MIL-HDBK-217 methodology

MIL-HDBK-217 is a military handbook for the calculation of the reliability of electronic devices. It was developed in 1961 [1], and the first version was published in 1965. The aim was to establish consistent and uniform methods for assessing the inherent reliability of military electronic equipment and systems. MIL-HDBK-217 provides the basis for forecasting the reliability of military electronic equipment and systems during design and contracting, and it is extensively used for military and non-military programs. It also provides a basis for the comparison and evaluation of the predicted reliability of different design variants. It is designed to be a tool for increasing the reliability of the equipment being designed, and it was periodically updated to follow changes in technology and innovations in design procedures. There are versions A, B, C, D, E and F, as well as two additions to version F (Notice 1 and Notice 2). It has not been modified since 1995. More details can be found in [1].

Although other industrial and commercial standards have appeared, MIL-HDBK-217 is still used by more than 80% of engineers doing reliability calculations.

During the last 15 years of the last century, when the update of MIL-HDBK-217E was carried out, it was concluded that the model of a constant failure rate (i.e. the exponential distribution of failures), on which it is mainly based, cannot reasonably be applied to every type of element and system if this is not really justified. One of the main results was that so-called physics of failure (PoF) should be implemented.

Efforts to update and revitalize MIL-HDBK-217F began in 2004. Version G was planned to be finished by the end of 2009. The MIL-HDBK-217 Rev. G update is authorized under the DoD Acquisition Streamlining and Standardization Information System (ASSIST). The Naval Surface Warfare Center (NSWC) Crane Division is managing the project, which started in 2008.

The objective of the Rev. G project was not to develop a better, more accurate reliability prediction tool or to produce more reliable systems. The actual goal was to return to a common and consistent method for estimating the inherent reliability of an eventually mature design during acquisition, so that competitive designs could be evaluated by a common process. To support this position, data from an NSWC Crane survey was cited showing that the majority of respondents use 217 for reliability prediction and that they wanted to continue to use it in its current format. The next most used methods were PRISM, 217Plus and Telcordia SR-332 [4].

The 217WG developed a dual approach for integrating PoF overstress and wearout analysis into 217 Rev. G alongside improved empirical prediction methods. One proposed PoF section addresses electronic component issues, while the second deals with Circuit Card Assembly (CCA) issues. These sections are meant to serve as a guide to the type of PoF models and methods that exist for reliability assessments.

It is not known whether version G has appeared, but RIAC's Handbook of 217Plus Reliability Prediction Models appeared in 2006.

Because of all these problems, the U.S. Department of Defense (DoD) recommended that MIL-HDBK-217 should not be the only source used in the calculation, and that a serious engineering evaluation of the input data must be performed. This requires additional knowledge of the engineers involved in the design of devices and in reliability calculations, and close cooperation of these engineers, which in practice is not adequate, so adequate education of engineers in the field of reliability is necessary.

2.2. RIAC methodology

Because MIL-HDBK-217 has limitations and has not been updated since 1995, RIAC introduced a new methodology, based on PRISM [2], and the new MIL-HDBK-217Plus. According to [2], RIAC's 217PlusTM is a methodology and a software tool developed by the Reliability Information Analysis Center (RIAC) to aid in the assessment of system reliability. It represents the next generation of the PRISM software tool initially released by the DoD-funded Reliability Analysis Center (RAC) in 1999 (which became RIAC in 2005). The original software contained six embedded models to estimate the failure rate of various components when exposed to a specific set of stresses defined by the user. The version of 217PlusTM released in 2006 contains twelve embedded component models: capacitors, connectors, diodes, inductors, integrated circuits, optoelectronic devices, relays, resistors, switches, thyristors, transformers and transistors (the PRISM software contained only six).

In 2006 the RIAC developed and published the Handbook of 217PlusTM Reliability Prediction Models to make available the equations and model parameters that form the basis of the 217PlusTM methodology. It can be purchased as a part of the software or separately.

Obviously, the Handbook is no longer free, as the former versions of MIL-HDBK-217 were. So, for small companies reliability prediction is not as cost-effective as before. And what else is new? Of course, there is a new methodology, which looks more complex. But if one looks at the 217PlusTM methodology (Picture 1), it can be seen that, as a minimum, the methodology is the same as with the former MIL-HDBK-217, except that MIL-HDBK-217 has not been updated since 1995 with up-to-date data about component failure rates. Of course, if we want a better reliability prediction, we must apply the more complex methodology suggested by 217PlusTM.

The specific system-level reliability prediction approach employed using 217PlusTM depends on several factors, including [3]:

1. Whether information exists on a predecessor item: If the item under analysis is an evolution of a predecessor item, the field experience of the predecessor item can be leveraged and modified to account for the differences between the new and the predecessor items. A prediction is performed on both items. These two predictions form the basis of a ratio used to modify the observed failure rate of the predecessor system.

2. The amount of empirical reliability data available on the item: If enough empirical data (field, test, or both) is available on the new system under analysis, it can be combined with the reliability prediction of the new system to form the best failure rate estimate possible. As a minimum, a serial configuration is assumed. The result of the component-based prediction is represented by λIA,new in Picture 1.

3. Whether the analyst chooses to assess the processes used in system development: The failure rate of a product can be further modified with the optional so-called Process Grade analysis, which will increase or decrease the predicted item failure rate depending on the relative robustness of the processes being considered. These modifications are reflected in the failure rate represented by λpredicted,new in Picture 1.

One must bear in mind that the 217PlusTM methodology calculates failure rates in terms of failures per million calendar hours, not operating hours. Therefore, user inputs for field data or user-defined failure rates need to be converted to a calendar-hour basis prior to incorporating them into a 217PlusTM reliability prediction. The conversion factors are [3]:

Calendar hours = Operating hours / Duty cycle
Operating hours = Calendar hours x Duty cycle
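As a rough illustration of the combination logic described above (a simplified sketch based on the description in [3], not the actual 217PlusTM equations; the numeric values and helper names are invented for the example), the snippet below converts field data to a calendar-hour basis and then scales the observed predecessor failure rate by the ratio of the two predictions and an optional process-grade multiplier.

def to_calendar_hours(operating_hours, duty_cycle):
    """Convert operating hours to calendar hours (duty_cycle = fraction of time powered on)."""
    return operating_hours / duty_cycle

def updated_failure_rate(lambda_obs_predecessor,
                         lambda_pred_new,
                         lambda_pred_predecessor,
                         process_grade_factor=1.0):
    """Scale the observed predecessor failure rate by the ratio of the two predictions,
    then apply an optional process-grade multiplier (simplified reading of the
    217Plus approach described above, not its published equations)."""
    ratio = lambda_pred_new / lambda_pred_predecessor
    return lambda_obs_predecessor * ratio * process_grade_factor

# Invented example: predecessor field data, 3.2 failures over 1e6 operating hours at 40% duty cycle
cal_hours = to_calendar_hours(1.0e6, 0.4)            # 2.5e6 calendar hours
lambda_obs = 3.2 / cal_hours * 1.0e6                 # failures per million calendar hours

lambda_new = updated_failure_rate(lambda_obs,
                                  lambda_pred_new=1.8,          # illustrative predictions
                                  lambda_pred_predecessor=2.4,
                                  process_grade_factor=0.9)     # robust processes lower the rate
print(f"estimated failure rate of the new item: {lambda_new:.2f} per 1e6 calendar hours")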

There is now 217Plus:2015, the latest revision of the popular 217PlusTM Handbook of Reliability Prediction Models based on the original MIL-HDBK-217, Reliability Prediction of Electronic Equipment. Available exclusively through Quanterion, 217Plus:2015 contains new failure rate models covering photonics components and updates to the twelve original 217Plus component failure rate models based on new data [5].

3. EXPERIENCE IN RELIABILITY PREDICTION

I have about thirty years of experience with the reliability of electronic equipment using MIL-HDBK-217, gained through my education [6], practice in reliability calculation [7, 8, 9, 10, 11], research and the publishing of papers [12, 13, 14, 15, 16, 17], teaching reliability and maintainability theory in higher education, and publishing textbooks on reliability and maintainability [18, 19].

Picture 1. 217PlusTM Approach to Failure Rate Estimation [3]


My experience has shown that the calculated prognostic MTBF of electronic equipment, obtained using MIL-HDBK-217, should be at least about twice the required MTBF in order for the operational (actual) MTBF to be equal to the required MTBF [16]; this was applied as a rule when the Parts Count reliability calculations were made [8, 9, 10, 11].
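A minimal sketch of that working rule, assuming the constant-failure-rate (exponential) model on which MIL-HDBK-217 is based; the component failure rates below are invented placeholders, not values from the handbook.

import math

# Invented example part failure rates in failures per 1e6 operating hours
# (a real Parts Count prediction would take these from MIL-HDBK-217 tables).
part_failure_rates = [0.8, 0.05, 0.3, 1.2, 0.02, 0.6]

lambda_total = sum(part_failure_rates)        # series system: failure rates add up
mtbf_predicted = 1.0e6 / lambda_total         # predicted MTBF in hours

required_mtbf = 150_000.0                     # hypothetical requirement in hours
print(f"predicted MTBF: {mtbf_predicted:,.0f} h")
print("rule of thumb satisfied:", mtbf_predicted >= 2 * required_mtbf)

# Reliability over a 1000 h mission under the exponential model: R(t) = exp(-t/MTBF)
t = 1000.0
print(f"R({t:.0f} h) = {math.exp(-t / mtbf_predicted):.4f}")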

It is not only my experience that a problem also lies in the inadequate input data for electronic elements and in the inadequate estimation of these input data in the reliability calculation.

It is also not only my experience that a further problem lies in establishing good reliability requirements. The people who establish the requirements are often not familiar with setting reliability requirements, so these requirements are unrealistic, usually higher than can be achieved in a cost-effective way.

A problem is also the inadequate cooperation between the engineers who design the equipment and the engineers who calculate the equipment reliability, because the engineers who design the equipment do not have knowledge about reliability and do not cooperate with people who are familiar with designing reliable equipment.

Apart from that, the calculation of hardware reliability faces a number of problems of its own. In [20] it is stated that there is no standard method for creating a hardware reliability prediction, so predictions vary widely in terms of methodological rigor, data quality, extent of analysis and uncertainty, and documentation of the prediction process employed is often not presented. Because of that, the IEEE created the standard IEEE Std 1413 (Standard Framework for the Reliability Prediction of Hardware) in 2009 [20].

The IEEE 1413 Standard focuses not on selecting or using any specific prediction methodology, but on the rigor with which the selected methodology is applied. This standard provides a clear set of guidelines that, when followed, give the user of the prediction a better understanding of its true value. The problem, however, is that the people involved in establishing reliability requirements, designing equipment and calculating the reliability of the equipment are not familiar with the methodology defined in this standard.


4. SOFTWARE RELIABILITY

Software reliability is an important attribute determining the quality of software as a product. There are many models of software reliability, but the requirements for software reliability are often not adequately specified, if they are specified at all. The problem is the different nature of software compared to hardware: although defined as a probabilistic function, software reliability is not a direct function of time. Another problem is that techniques for software reliability prediction are rarely used as routine software engineering practice. This calls for collaboration between software and reliability subject matter experts to take appropriate steps to include software in the reliability case for the system [21].
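As one commonly cited example of such a model (the paper itself does not single out any particular one), the sketch below evaluates a simple reliability growth model of the Goel-Okumoto form m(t) = a(1 - e^(-bt)); the parameter values are invented for illustration.

import math

def expected_failures(t, a, b):
    """Goel-Okumoto software reliability growth model:
    expected cumulative number of failures found by test time t."""
    return a * (1.0 - math.exp(-b * t))

# Invented parameters: a = total expected faults, b = fault detection rate per test hour
a, b = 120.0, 0.015

for t in (50, 200, 500):
    m = expected_failures(t, a, b)
    print(f"after {t:4d} h of testing: ~{m:5.1f} failures observed, ~{a - m:5.1f} faults remaining")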

The real issue with reliable software is that the critical functions must fail safe. Failing safe is often misunderstood and misinterpreted as never failing. Software safety and software reliability are allies in the realization of their mutual goal of developing safe and reliable software, and again there is a need for cooperation between software and reliability engineers. However, few educational institutions or industry professionals teach the basics of software reliability and its dependence on software safety [19].

Redundancy of software is a special problem, because software is different from hardware, and every copy of the software contains the error in the same place [19].

There are many models for software reliability assessment, but none of them is generally accepted.

5. HUMAN RELIABILITY

Human reliability can mean preventing accidents and minimizing the consequences of accidents that do occur. The effects of decisions made by people to act or not to act have consequences for the technological systems they operate. Disasters and major system failures are frequently a sequence of events in which one or more people have made a decision or taken some action while operating, maintaining or repairing a technological system. When the potential consequences are significant, such as catastrophic loss of equipment, long-term damage to the environment, or loss of life, reliability engineers working collaboratively with others (such as risk management, human factors and safety engineers) can have an important impact [22].

There are different approaches and models of human reliability: the Domino Model of Accident Causation, the Human Factors Analysis and Classification System (HFACS), the Human Error Assessment and Reduction Technique (HEART), the Technique for Human Error Rate Prediction (THERP), Performance Variability Models, Learning Models, etc. None of these approaches offers a complete solution, but collectively they offer some useful guidance. For example, in the US the FAA (Federal Aviation Administration) uses HFACS as a tool for understanding the role of human error in aviation accidents [22].

Human reliability engineers should keep the following points in mind [22]:
- The contributions of people, both designers and operators, to the failure of technological systems must be integrated into reliability analysis because they cannot be separated.
- Failures are preventable, but only with experience.
- Procedures, rules, codes, standards and laws cannot prevent system failures.
- Prediction is impossible unless learning is measured.

I have considered human reliability important from the beginning of my work in reliability, so human reliability is included in my textbooks [18, 19].

6. RELIABILITY OF CLOUD SERVICES

With the emergence of cloud computing and online services, customers expect services to be available whenever they need them, just like electricity or a dial tone. This expectation requires organizations that build and support cloud services to plan for probable failures and to have mechanisms that allow rapid recovery from such failures. Cloud services are complex and have many dependencies, so it is important that all members of a service provider's organization understand their role in making the service they provide as reliable as possible [23].

When formulating a cloud infrastructure, one must consider the issue of reliability and uptime and ask the service provider to configure the computing infrastructure for redundancy and failover [23, 24].
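A small sketch of why redundancy and failover raise availability (generic parallel-redundancy arithmetic, not a calculation taken from [23] or [24]; the availability figures are invented and the replicas are assumed to fail independently).

def parallel_availability(single_availability, n_replicas):
    """Availability of n independent redundant replicas:
    the service is down only if every replica is down at the same time."""
    return 1.0 - (1.0 - single_availability) ** n_replicas

A = 0.99   # invented availability of a single virtual machine
for n in (1, 2, 3):
    a = parallel_availability(A, n)
    downtime_h = (1.0 - a) * 8760.0   # expected downtime per year in hours
    print(f"{n} replica(s): availability {a:.6f}, ~{downtime_h:.2f} h downtime/year")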

Cloud service providers and their customers have varying degrees of responsibility, depending on the cloud service model. For example, for infrastructure-as-a-service (IaaS) offerings, such as virtual machines, the provider and the customer share responsibilities: although the customer is responsible for ensuring that the solutions they build on the offering run in a reliable manner, the provider is still ultimately responsible for the reliability of the infrastructure (core compute, network and storage) components [23].

In a local area network (LAN), redundancy used to mean that another server or two were added to the datacenter in case there was a problem. These days, with virtualization, redundancy might mean a virtual server being cloned onto the same device, or all the virtual servers of one machine being cloned onto a second physical server. It becomes more complex in the cloud. While one may think of one's server as being hosted at the datacenter of one's cloud provider, it is not so easy to determine where it actually is. Parts of the data may be housed in one location and other parts scattered throughout the country (possibly even the world). And when the provider adds a redundant system, the data is again scattered throughout their cloud. So it is not a matter of the service provider wheeling in a new server to provide redundant services; rather, they simply reallocate resources to give the customer a redundant system.

If a cloud storage system is unreliable, no one wants to save data on it, so reputation is important to cloud storage providers. If there is a perception that the provider is unreliable, they won't have many clients. And if they really are unreliable, they won't be around long, as there are so many players in the market.


Cloud-based systems are very complex and have an inherent issue of reliability because of their scale and complexity. Cloud services become more reliable when they are designed to recover quickly from unavoidable failures, particularly those that are outside an organization's control.

7. RELIABILITY OF INTERNET OF THINGS

In recent years, with the improvement of Internet connectivity and advances in smart personal computing devices, the Internet of Things (IoT), along with its applications and supporting hardware platforms, has become a hot topic in both the academic and practitioner communities. IoT systems can be deployed in many scenarios, where the scale of IoT deployments can vary from personal wearables to city-wide infrastructures [25].

However, these deployments, as IoT implementations, depend heavily on Internet connectivity and therefore on the network infrastructure. In large-scale IoT deployments, such as those in smart cities and smart communities, failures in the network infrastructure can be fatal to the operation when large events or emergencies stress or strike the public network facilities. Enabling the reliability and resilience of large-scale IoT deployments is critical in these scenarios and promising for further research [25].

There are several research challenges that must be resolved to support the operation of IoT systems for communities. The first challenge is that of scale, i.e. the huge number of devices: in community-wide IoT systems, the number of participating devices can make a big difference in the design of the system architecture and influence the network infrastructure, and bringing in mobility and crowd participation makes this an even bigger challenge. The second challenge is that of dynamics: both the physical and the networking environment in communities can change, mobility brings in more changes, and adaptation to changes is important. The third challenge is that of interoperability: with the growing number and heterogeneity of IoT devices, interoperation and coordination are the keys to making all these devices, platforms and their supporting software an integrated system instead of a pile of independent pieces [25].

The IoT is obviously very complex, so it is difficult, almost impossible, to determine an analytical solution for the reliability and availability of such a complex system; therefore simulation methods, which in reality give an estimation, are justified and desirable for solving this problem [26, 27].
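A minimal sketch of such a simulation-based estimate, in the spirit of the approach the author points to in [26, 27], but with an invented toy topology and invented element reliabilities, purely to show the principle.

import random

# Toy IoT path: device -> gateway -> (two redundant network links) -> cloud service.
# Reliabilities over the mission time are invented example values.
R_DEVICE, R_GATEWAY, R_LINK, R_CLOUD = 0.97, 0.95, 0.90, 0.99

def system_works(rng):
    """One Monte Carlo trial: the chain works if every stage works;
    the two parallel links count as working if at least one is up."""
    device = rng.random() < R_DEVICE
    gateway = rng.random() < R_GATEWAY
    links = (rng.random() < R_LINK) or (rng.random() < R_LINK)
    cloud = rng.random() < R_CLOUD
    return device and gateway and links and cloud

rng = random.Random(42)
trials = 100_000
successes = sum(system_works(rng) for _ in range(trials))
print(f"estimated system reliability: {successes / trials:.4f}")
# Analytical value for comparison: 0.97 * 0.95 * (1 - 0.1**2) * 0.99 ~= 0.903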

8. EDUCATION FOR RELIABILITY

From chapter 2 it is obvious that we cannot use only MIL-HDBK-217 to predict reliability, because some of its methods and data have become obsolete. We must combine it with other methods, for example physics of failure, but this requires a lot of new knowledge and skills from the designers of the equipment. And where and when should this knowledge be acquired? Having in mind that this knowledge is needed while designing a product, and that the knowledge needed for designing is acquired during basic studies, I think that the best way to gain the basic knowledge about reliability is also during basic studies (graduate or undergraduate). However, this is now rarely the case in Serbia. Faculties, including the Military Academy, which used to have subjects such as Reliability and Maintainability in their curricula, omitted these subjects during the transition from the previous education system to the so-called Bologna system. Now there are few faculties in Serbia with such subjects in graduate or undergraduate studies, apart from the Information Technology School, which is a school for applied studies.

9. RELIABILITY CULTURE

The most important part of the development of a reliability program in an organization is to have a culture of reliability. It is extremely important that everyone who is involved in the creation of products, from top to bottom, realizes that a good reliability program is necessary for the success of the organization.

Achieving this culture of reliability is harder than one might imagine [12].

The first step in building a culture of reliability is the support of the top management of the organization. Without this support, the application of a reliability program is difficult to achieve. It is also important that a good reliability program results in financial gains, especially because of the reduced cost of the warranty period and higher customer satisfaction, but this is not easy to see in the beginning.

It is important to have the understanding and support of the whole organization. Since the introduction of the program will affect the work of all middle-level administrative structures, engineers and technicians, they need to understand the benefits of such work. It has to be shown that these activities, which may initially seem unnecessary or even counterproductive to them, in the end contribute to the success of the organization. For example, if the technicians understand well how the data obtained by testing will be used, they will perform the tests and record the data better. All in all, a reliability program is a great opportunity for the success of the organization if everyone in the organization understands what it contributes and supports the actions that it introduces.

10. CONCLUSION

In spite of its limitations, MIL-HDBK-217 is still used by more than 80% of engineers in calculating reliability. Of course, there are other industrial and commercial standards for calculating reliability (in second place are PRISM and Telcordia).

Because of the limitations of MIL-HDBK-217 and the problems of upgrading it, the DoD recommended that MIL-HDBK-217 should not be the only source used in the calculation, and that a serious engineering evaluation of the input data in the calculation must be performed. This requires additional knowledge of the engineers involved in the design of devices and in reliability calculations, and close cooperation of these engineers, which in practice is not adequate, so adequate education of engineers in the field of reliability is necessary. Here the IEEE Std 1413 can be useful.


RIAC's 217PlusTM methodology and software tool is a replacement for MIL-HDBK-217; it is no longer free and it is more complex, but, as a minimum, this methodology is the same as that of the former MIL-HDBK-217.

New challenges in reliability are cloud services and the Internet of Things, which are very complex and have many dependencies, so they put new requirements on education in reliability and on the reliability culture.

It is also important to bear in mind that there are cost trade-offs associated with reliability solutions that need consideration in order to implement sufficient reliability at optimal cost.

References
[1] Pokorni S., Problems of reliability prediction of electronic equipment, 6th International Scientific Conference on Defensive Technologies OTEH 2014, Belgrade, 09-10 September 2014, pp. 835-838
[2] RIAC, Handbook of 217Plus Reliability Prediction Models, The Journal of the Reliability Information Analysis Center, Third Quarter, 2006
[3] Nicholls D., An Overview of the 217PlusTM, The Journal of the Reliability Information Analysis Center, Fourth Quarter, 2006
[4] McLeish J. G., Enhancing MIL-HDBK-217 Reliability Predictions with Physics of Failure Methods, Reliability and Maintainability Symposium, RAMS 2010
[5] https://www.quanterion.com/product/publications/hdbk-217plus-2015 (viewed April 2016)
[6] Pokorni S., Approach to determining the reliability of electronic devices in the operating conditions of the aircraft, magister thesis, Faculty of Electrical Engineering, Sarajevo, 1985 (in Serbian)
[7] Pokorni S., Pavlović S., Study of checking the reliability of the device for improving the stability and controllability of aircraft, project, VZ "ORAO", Rajlovac, Sarajevo, 1990 (in Serbian)
[8] Pokorni S., Parts count reliability calculation of controllable camera mount (project), Energoinvest, Sarajevo, 1990 (in Serbian)
[9] Pokorni S., Milanović V., Reliability calculation of electronic switch for car headlights AUTO LIGHTS 12-24 for Elektrometal - Niš, Faculty of Electrical Engineering, Belgrade, 2008 (in Serbian)
[10] Pokorni S., Reliability calculation of microwave low-noise amplifier, MTT INFIZ, Belgrade, 2014 (in Serbian)
[11] Pokorni S., Reliability calculation of digital moving target indication DBSO-11, DBSO-31, DBSO-128, MTT INFIZ, Belgrade, 2014 (in Serbian)
[12] Pokorni S., Reliability and Maintainability of technical systems: theory and practice, ICDQM 2009, proceedings, pp. 44-57, Belgrade, 25-26 June 2009 (in Serbian)
[13] Pokorni S., Ramović R., The role of physics of failure in assessment of reliability of up to date technical systems, OTEH 2005, Belgrade, 06-07 December 2005, on CD (in Serbian)
[14] Pokorni S., Ramović R., Assessment of high reliability of military equipment by physics of failure, OTEH 2007, Belgrade, 03-05 October 2007, on CD (in Serbian)
[15] Pokorni S., Reliability of human-technical systems, Novi Glasnik, 6/1999, pp. 11-17 (in Serbian)
[16] Pokorni S., Pavlović S., Reliability testing of aircraft electronic equipment, ETAN, book I, pp. 107-113, Zagreb, 4-8 June 1990 (in Serbian)
[17] Pokorni S., Analytical-experimental method of calculation of reliability of aircraft electronic equipment, Military Technical Courier, 4/1988, pp. 420-423
[18] Pokorni S., Reliability and Maintainability of technical systems, Military Academy, Belgrade, 2002 (in Serbian)
[19] Pokorni S., Reliability of information systems, textbook, Information Technology School, Belgrade, 2014 (in Serbian)
[20] Elerath G. J., Pecht M., IEEE 1413: A Standard for Reliability Predictions, IEEE Transactions on Reliability, Vol. 61, No. 1, March 2012, pp. 125-129
[21] Kapur P. K., Measuring Software Quality (State of the Art), 5th DQM International Conference Life Cycle Engineering and Management ICDQM 2014, proceedings, pp. 3-45, Belgrade, 27-28 June 2014
[22] Franklin P., Introduction to Human Reliability Engineering, Annual Reliability and Maintainability Symposium, 2012
[23] Microsoft, An introduction to designing reliable cloud services, January 2014
[24] Velte T., Velte A., Elsenpeter R., Cloud Computing, A Practical Approach, McGraw Hill, 2010
[25] Zhu Q., Enhancing Reliability of Community Internet of Things Deployments with Mobility, http://srds2015.cs.mcgill.ca/papers/paper6.pdf
[26] Pokorni S., Janković R., Reliability Estimation of a Complex Communication Network by Simulation, 19th Telecommunication Forum TELFOR 2011, Belgrade, Serbia, November 22-24, 2011, pp. 226-229, IEEE 978-1-4577-1500-6/11
[27] Pokorni S., Ostojić D., Brkić D., Communication network reliability and availability estimation by the simulation method, Military Technical Courier, 4/2011, pp. 7-14


APPLICATION OF INNOVATION STANDARDS IN THE FIELD OF


WEAPONRY AND MILITARY EQUIPMENT
DUAN RAJI
Innovation Centre of Faculty of Technology and Metallurgy, University of Belgrade, rajic.dusan1@gmail.com
OBRAD ABARKAPA
Faculty of Applied Management, Economics and Finance (Belgrade), Academy of Economy University, Novi Sad,
obrad.cabarkapa@gmail.com

Abstract: Weaponry and military equipment buyers are increasingly looking for more innovative products, those that
are results of a great deal of knowledge and that are more efficient in comparison to other high quality products in the
same category. With this aim in view, this paper emphasizes the importance of application of innovation standards
TS16555. The listed innovation standards have a potential application in the field of WME (weaponry and military
equipment), especially in the sector of production, service delivery, marketing, organization innovation etc. However,
introducing the listed standards does not mean that the innovative productivity will automatically increase within
business entities. This is why it is also important to introduce creative engineering into everyday practice. It has its
mainstay in TRIZ (Theory of Solving Innovative Tasks). TRIZ itself possesses various tools for efficient solving of
technical and technological problems of different complexity levels. One of the most important tools is the application
of 76 standards. This decision could contribute to the occurrence of innovations in the field of WME, which would be
based on science, which would possess a higher innovative level, and as such, have significantly higher chances of
success on the world market.
Key words: innovations, weaponry, military equipment, innovation standards, TRIZ, creative engineering.

1. INTRODUCTION
Although domestic economic entities in the field of
weaponry and military equipment (WME) have integrated
a quality system (IQMS), it is clear that high quality
business is not in itself a guarantee of survival on the
market. Out of two companies that own IQMS, the more
successful on the market will be the one whose WME
integrates a larger share of inventive activities [1-3].
In today's knowledge society it is necessary to expand the
concept of innovation development onto the concept of
intellectual property, which represents a comprehensive
concept that deals with creative engineering [2].
Most powerful western companies lend or outsource a major
part of their production to other companies, and they
concentrate almost exclusively on creating new products and
designs, while promoting their brands (trademarks) to attract
consumers. This means that products are designed and
developed in one place, in one company, and that production
usually takes place elsewhere. For such companies, the value
of their material goods can be much lower than the value of
intellectual property (such as brand value or ownership of
exclusive rights to key technologies and attractive design).
This shows the importance of intellectual property in modern
business.

The implementation of a new quality management system, which would include the implementation of the innovation standard TS16555 [4-10], contributes to the survival and further development of economic entities in the domestic defense industry. The listed standards have a potential application in product innovation, customer service, marketing, the organization itself, etc. However, their introduction into economic entities does not by itself guarantee success in creativity and innovation. This requires the application of innovative methodologies such as the theory of solving inventive tasks (TRIZ) and its standards of innovation, as well as other tools that connect the QMS and TRIZ [2, 11]. It is also necessary to educate experts in technical fields (those who are expected to create innovations) and management (those who are expected to create the conditions for successful innovative work).

The aim of this work is to contribute to the establishment of a modern system of innovative activity in the Serbian defense industry, based on the principles of modern science and on positive practical experience in this area gained within successful innovative companies and organizations.

2. IMPORTANCE OF PRODUCT INNOVATION

Each product, just like a biological system, has its own life cycle, which consists of procreation (creation), growth (development), maturity (peak) and death (obsolescence).

Fig. 1 shows the life cycle of a product on the market and


it can be described by a Gaussian curve, indicating the
change of pace in the technical system (TS) development
in time.


needs to know the "life line" of the TS he is working on


or is trying to improve.
The most accurate sign of "aging" of a TS is a sudden
increase in the number of patents for its improvements
over a very short time period. Those registrated
inventions are usually of 1st or 2nd inventive level, ie.
small improvements that do not reach the basic principles.
However, sometimes it happens that some extinguished
TS returns, but in a different, higher quality version. This
phenomenon is called technics' spiral development. This
occurs in situations when a TS is primarily expected to
meet ecological cleanliness requirements as well as
efficiency at the expense of speed and power for example
[2]. For example, producing electric energy by using wind
turbine or solar energy instead of using thermal power
plants.

Figure 1: A way of prolonging a product's life


"Childhood" of a TS generally lasts quite long. On the
Gaussian curve it is the upward rising part. This is when a
TS is being designed, processed and the experimental
sample is being prepared, as well as the preparation for
mass production. Then, the "bloom" of the TS occurs, at
which point it radically improves and becomes more
powerful and productive. TS is produced in series, its
quality improves and the market demand for it grows.
After a while, it is becoming increasingly difficult for
TS's improvement to meet the growing needs of a man.
This is the moment TS changes its outer marker, but
remains the same as it was, with all its shortcomings. All
his resources are exhausted. The "aging" of the TS starts
and it is shown in the drop-down side of the Gaussian
curve. If at this moment the quantitative indicators of the
system artificially increase or develop its capacities and
dimensions, then the TS itself enters into conflict with the
environment and humans. Then the TS begins to cause
more harm.

The realized profit from a new product (invention) is


directly dependent on the degree of protection. For
example, in the pharmaceutical industry, a patent for the
product (product patent) provides companies with a
monopoly, and the highest percentage of a possible
achievable profit (90%). A patent for the process (process
patent) is a lower level of protection and a lower
percentage of profit is feasible (70%). A protected form of
the product (model, sample), depending on a degree of
protection could bring about 60% of the potential profits
to companies. A product protected by a stamp brings
about 50% of the profits, while the product without the
trademark yields only 20-30% of a possible profit [2].
A similar logic can be applied to any other product,
including WME ones.
So, for the needs of our defense industry, a profitable
business formula should be sought in the policy of
creating innovative products [3]. Deterioration of the
structure of our defense industry can be noticed when the
export opportunities decline, because of the lack of
competitiveness of WPE products on the market. Product
competitiveness on the global market can only be
achieved through its innovation and quality.

After this, a new TS appears, better and more sophisticated than the previous one. But then the same "life cycle" is repeated in it too, from the beginning to the end, and so on, indefinitely.
Today, about 80% of the products on the most advanced WME markets are under the age of 5 years [2, 11]. It should be noted that the product development time is being shortened in order to increase competitiveness on the market and reduce the development costs.

Today's innovation is based on the teamwork of highly educated specialists in technical professions who, through planned research and with generous financial support from their employers or other investors, consciously "target" certain new technical solutions. In this sense, the key resources of invention are education, organization and capital.

It should also be noted that TS shift does not take place


quickly and at once. The old TS is very strong, and highly
resistant to change. It has its own base developed: the
services, the designers' services, the sales network, which
constantly "injects shots" in a form of small
improvements, thus extending its "life." The new TS,
although it is ready for an independent "life", it will still
be using the experience of the old TS for a long time.

A company's patenting of inventions can be a huge advantage and can provide exclusive rights to use and exploit the invention for up to 20 years from the patent application date. In addition, patent protection can also provide [12]:
Good status on the market

In technics developing there is no revolution, only


evolution and a gradual handover of functions from an old
TS to a new one. [2]. If some unexpected mutations occur
in technics, they are either temporary or premature,
because each TS is connected by thousands of threads
with other systems, allowing them to stick together. A
sudden separation from this environment and its ignoring
"condemns the system to death". This is why an innovator

Commercialization of the investment


Licensing of the invention
A strong negotiating position
Preventing another party from patenting the invention
The competition will not be able to exploit the
invention

Possibility of licensing, sale or transfer of technology


secured.


field) act detrimentally to one another. It is necessary to


destroy the harmful su-field. Consult the section on
standards for su-field demolition. It is noticed that
different, above-mentioned tasks, fall under one standard,
labeled 1.2.2., which states: "If a coupled beneficial effect
appears between the two substances in su-field, with no
need for preservation of the direct contact between the
substances, and with the use of indirect external
substances not being permitted or being considered
meaningless, the task is solved by introducing a third
substance between the two existing ones, which
represents their modification". A more detailed guidance
to be found in the standard' chapter 5.1.1. which reads as
follows: "If it is necessary to introduce a substance into
the system, and task conditions do not allow it or it is
unadmissible under the terms of the system, the bypass
ways should be used: 1. An empty space is introduced
instead of the substance; 2. A field is introduces instead of
the substance; 3. External supplements are used instead of
the inner supplements; 4. A very active supplement is
introduced in small doses; 5. A simple supplement is
introduced in very small doses, but it is distributed
concentrated on particular elements of the object; 6. The
supplement is introduced temporarily; 7. Instead of the
object itself, its copy (model) is used and the introduction
of supplements is allowed in it; 8. The supplement is
obtained by decomposition of the environment or the
object itself, for example, by electrolysis or by changing a
physical state of a part of the object or external
environment".

Innovation being the main driver that can lead an


organization to success, it is necessary to apply, in
parallel with the system of quality, innovation quality
system TS 16555 as well. It includes [4-9]: system of
innovation
management,
strategic
information
management, innovative thinking, intellectual property
management, collaboration and creativity as well as the
evaluation
of
innovation
management.
The
implementation of this system can bring a variety of
benefits to organizations- the benefits which, in addition
to revenue growth and increased profits, also relate to the
introduction of a new way of thinking and creating of new
values, thereby allowing a better understanding of the
needs and market opportunities, helping in the assessment
of business risks and making proposals for their
overcome, contributing to the overall creative thinking of
the whole organization and encouraging the involvement
of all employees through teamwork.
Although our country aspires to join the EU, given the
economic opportunities of the domestic defense industry,
it is recommendable to create conditions (through
legislation) for the development and mass application of
the evolutionary innovation creativity system, that has
contributed to the economic development of Far Eastern
companies, and their countries, such as China, Japan,
South Korea, Singapore and others. [13].

3. TRIZ STANDARDS' APPLICATION IN


SOLVING INNOVATIVE PROBLEMS

So, when we think about the above mentioned standards


for solving the described tasks and the recommendations
for its use, in order to solve the problem, the third
substance should be introduced (preferably one that
would consist of the ones given in the TS), and the
electric field should be released through it. As a result of
electroosmosis, a thin layer of water will form between
them and it will prevent the conflicting elements' sticking
together. Based on the described solution, we can
conclude that in the basis of all these tasks (and the like)
lies a general physical phenomenon indicated by the
standards. From the decribed example it can be seen that
the standards for solving inventive tasks represent the
rules of synthesis and transformation of TS, which are a
direct result of the law of development of these systems.

It is not easy to accept that innovative work is not a matter


of innovators' talent, rather simple work on successful
implementation of the known laws of nature and laws of
development of the technical systems. Based on the fact
that the physical laws occur in different situations and
always work, further developments of a situation can be
calculated or predicted. Similarly, the laws of technical
development are unique for all its systems, and they also
allow prediction or calculation of its elements' behaviour.
The standards that carry concentrated experience of
millions of innovators are based on this fact.
Example:

It can be determined that the standards are practical ways


of realization of the law of development, applicable to
different types of physical contradictions. Therefore,
many examples that illustrate the effect of the law of
development in TS can be used as examples of the
standards realization.

How to reduce the consumption of oil in bakeries, that is


used for lubrication of working surfaces of technological
equipment in order to prevent pastry products from
sticking to them [14]? How to increase the productivity of
an excavator working on the excavation of clay, when its
bucket rotor permanently gets clogged by clay? How to
facilitate the removal of the metal cladding attached to the
hardened concrete mixture? How to increase the speed of
tamping the metal poles in the hard, irrigated soil?

According to contemporary notions, TS development follows the line: incomplete su-field systems - complete su-fields - complex su-fields - forced su-fields. In each of the links of this chain there is a possibility to pass either to a lower sub-system level or to a higher upper-system level.

All of these listed tasks as well as hundreds of other


similar ones, can be solved by using a single standard.
Here is the analysis. In the listed examples there are two
mutually conflicting substances (dough- equipment; clay bucket rotor; pole- soil, concrete metal cladding), which
under the influence of intermolecular forces (adhesion

It is too bold to suggest that the standard applies to all


tasks. That instrument is in the stage of development, in
accordance with the understanding of the law of TS

development, which is gaining more strength rapidly.


to create conditions for generating innovations that are


based on a scientific methodology, rather than an
individual's intuition or inspiration.

How do the standards differ from the principles of the


elimination of technical contradictions? Principles
indicate only the general path and a fairly wide area
within which they can provide a solution, while the
standards recommend specific actions necessary for the
restoration of the working capacity of the existing system
or the synthesis of a new one. Besides, as a rule, the
standards provide not only one method but a combination
of them based on the physical effect. This qualitatively
increases the possibility of obtaining an effective solution
to the problem.

The application of TRIZ's 76 standards can solve


numerous technical and technological problems that have
been encountered in the domestic defense industry in a
more effective way.

ACKNOWLEDGMENTS
The Serbian Ministry of Education and Science supported this work through grant no. TR34034 (2011-2016).


References
,.: (2007) , , .
Raji,D., Kamberovi,., akula,B.: (2016) Kreativni inženjering, Beograd, ICTMF.
Raji,D., akula,B., Jovanovi,V.: (2006) Uticaj industrijske svojine na tehničko-tehnološki faktor odbrane, Vojnotehnički glasnik, 4, 485-501.
CEN/TS 16555 (2013) Innovation Management - Part 1: Innovation Management System.
CEN/TS 16555 (2014) Innovation Management - Part 2: Strategic intelligence management.
CEN/TS 16555 (2014) Innovation Management - Part 3: Innovation thinking.
CEN/TS 16555 (2014) Innovation Management - Part 4: Intellectual property management.
CEN/TS 16555 (2014) Innovation Management - Part 5: Collaboration management.
CEN/TS 16555 (2014) Innovation Management - Part 6: Creativity management.
CEN/TS 16555 (2014) Innovation Management - Part 7.
Raji,D., akula,B., Jovanovi,V.: (2006) Uvod u TRIZ ili kako postati kreativan u tehnici, Beograd, SIG, http://www.sigonline.rs/files/File/knjige/uvodutriz.pdf, 9. 2015.
(2010) , .
Masaki,I.: (2008) Kaizen - Ključ japanskog poslovnog uspeha, Beograd, Mono i Manjana.
,.: (1999) , , .
p,..: (2004) , ., " p", 2- ., . : ,.- .
,.: (1989) . ., " pp", 3- .,. .
p..: H (1991) Hp, "H", 2- .

According to Altshuller, there are 76 standards, which are divided into five major classes [2, 15-17]:
1. The creation and destruction of su-field systems
2. Development of the su-field system
3. Move to the upper system and to the micro level
4. The standards for detection and measurement of a system
5. Standards for the standards' application.
Each standard class is divided into subclasses and subgroups.
To solve a task, it is first necessary to determine its su-field formula, i.e. to identify which standard class the task belongs to, and then its sub-class and group. Special attention should be paid to the standards of the fifth class. They are applicable when complications arise in the search because the required fields or substances are missing. This class increases the ideality level of the TS being worked on, because it is aimed at the maximum use of the resources, both substances and fields, that already exist in the given system.

4. CONCLUSION
The paper points out the importance and the necessity of introducing the CEN/TS 16555 innovation standard for the successful operation of business entities within the domestic defense industry. It is proposed to implement these standards within an Integrated Quality Management System (IQMS).
By applying the innovation standards, the work of successful businesses improves, while those that are failing get the opportunity to change their unfavourable position on the market.
In order to maintain successful innovation creativity, it is necessary to continue the education of the persons who are expected to generate innovations, namely the experts in the technical and technological fields. It is therefore proposed that the employees of the defense industry be trained in "Creative Engineering", in order to create conditions for generating innovations that are based on a scientific methodology rather than on an individual's intuition or inspiration.

SURFACE TEXTURE FILTRATION INTERNATIONAL STANDARDS AND FILTRATION TECHNIQUES OVERVIEW
SRDJAN IVKOVI
Military Technical Institute Belgrade, Experimental Aerodynamics Division, Prototype Production Department,
srdjan.vti@gmail.com
BRANKA LUKOVI
branka.lukovic@mod.gov.rs
VELJKO PETROVI
Ministry of Defense, Material Resources Sector, Department for Defense Technologies, veljko.petrovic@mod.gov.rs

Abstract: Filtration is required for several reasons in the process of surface texture analysis. The main reason for using a filter is to separate long-scale components from short-scale components. Filtration techniques are used in surface metrology to separate the roughness component from the waviness component and the form component, in order to calculate parameters according to international standards. The paper describes roughness and waviness problems of profiles and surface areas, with a comprehensive reference to the relevant international standards.
Keywords: Filtration, Surface texture, Roughness, Waviness.

1. INTRODUCTION
An important application of metrology in industry arises
while inspecting geometrical attributes of manufactured
objects to verify whether they satisfy tolerances specified
during the product development phase.

ISO 3274 defines the measurement scheme shown in Picture 1. The profile measured by profilometers is called the extracted profile. It is sampled and digitized, and represents an abstraction of the real surface [2].

Intuitively, a real surface is the set of an infinite number of points that separate a work-piece from its surroundings.
Given any physical work-piece, it is, of course,
impossible to come up with a computable mathematical
representation of this set; we need some additional
information about the resolution at which the real surface
is perceived. Theoretically, a mathematical model that
approximates the real surface can be obtained within any
measure of closeness by choosing the nesting parameter
very close to zero [4].
A real surface corresponding to a specified nesting parameter is then partitioned into real integral features. These features still contain an infinite number of points.
During actual inspection, however, we sample only a
finite number of points on these features. These are called
extracted integral features. It turns out that sampling alone
is insufficient to extract a feature; it should be
accompanied by some smoothing to remove noise and
unwanted details from the measured data. Therefore,
techniques for extracting information on real integral
features involve both sampling and some filtration [5].
Filtration is required for several reasons in the process of surface texture analysis. The main reason for using a filter is to separate long-scale components from short-scale components. We want to separate waviness from roughness. Filtration techniques are used in surface metrology to separate the roughness component from the waviness component and the form component, in order to calculate parameters according to international standards. Characterization surface parameters can be derived with the aim of controlling the manufacturing process.
Picture 1. Procedure for obtaining Primary profile,


Roughness and Waviness (ISO 3274)

Before the measurement process is started, the section of


the surface that will be measured should be determined.
The reference system is placed so that the x-axis runs
perpendicular to the process traces. Several filtering effects are introduced by the probe and the bandwidth of the instrument. The real surface is modelled by a
mechanical surface (boundary) when measured with a
stylus, or by an electromagnetic surface which represents
the surface envelope sensed by an optical probe [ISO
14406].



2. SURFACE TEXTURE INTERNATIONAL STANDARDS

Engineers working in the field of surface texture should know the following GPS (Geometrical Product Specification) ISO standards.

For profiles, we have Pa (Mean line/curve for Primary


profile), Ra (Roughness parameters) and Wa (Waviness
parameters).
In contrast with naming rules used with profile
parameters, prefixes of the areal parameters do not reflect
the nature of the surface, distinguishing between
roughness and waviness. In the ISO 25178 standard, all
areal parameters start with the upper case letter S or the
upper case letter V.
For surfaces, we only have Sa, which can therefore be a
parameter of roughness, or waviness, or calculated on the
primary surface, depending upon the pre-filtering that is
carried out before the parameter is calculated. This
decision is based upon the multiplicity of processing and filtering methods that are available to the metrology engineer for extracting information from a surface [1]. Processing
methods do not necessarily separate the surface texture
into two components that are roughness and waviness but
in certain cases alter the surface in a subtler manner [5].
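As a small illustration of how such parameters are evaluated once the pre-filtering has been chosen, the following minimal Python sketch computes Ra for a roughness profile and Sa for a scale-limited surface as arithmetic mean absolute height deviations; it assumes the filtering has already been performed and is not tied to any particular instrument software.

import numpy as np

def Ra(roughness_profile):
    # Arithmetic mean deviation of an already filtered roughness profile,
    # with heights referred to the mean line.
    z = np.asarray(roughness_profile, dtype=float)
    z = z - z.mean()
    return np.mean(np.abs(z))

def Sa(scale_limited_surface):
    # Arithmetic mean height of a scale-limited surface (2-D array of heights).
    s = np.asarray(scale_limited_surface, dtype=float)
    s = s - s.mean()
    return np.mean(np.abs(s))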

Profile surface texture standards:
- ISO 1302 (GPS) - Indication of surface texture in technical product documentation
- ISO 3274 (GPS) - Surface texture: Profile method - Nominal characteristics of contact (stylus) instruments
- ISO 4287 (GPS) - Surface texture: Profile method - Terms, definitions and surface texture parameters
- ISO 4288 (GPS) - Surface texture: Profile method - Rules and procedures for the assessment of surface texture
- ISO 5436-1 (GPS) - Surface texture: Profile method - Measurement standards - Part 1: Material measures
- ISO 5436-2 (GPS) - Surface texture: Profile method - Measurement standards - Part 2: Software measurement standards
- ISO 12085 (GPS) - Surface texture: Profile method - Motif parameters
- ISO 12179 (GPS) - Surface texture: Profile method - Calibration of contact (stylus) instruments
- ISO 13565-1 (GPS) - Surface texture: Profile method - Surfaces having stratified functional properties - Part 1: Filtering and general measurement conditions
- ISO 13565-2 (GPS) - Surface texture: Profile method - Surfaces having stratified functional properties - Part 2: Height characterization using the linear material ratio curve
- ISO 13565-3 (GPS) - Surface texture: Profile method - Surfaces having stratified functional properties - Part 3: Height characterization using the material probability curve
- ISO 16610-1 (GPS) - Filtration - Part 1: Overview and basic concepts.

Picture 2. Procedure for obtaining the Primary surface, areal waviness and roughness (ISO 25178)

The procedures shown in Pictures 1 and 2 are quite similar [2]. The vocabulary is introduced in ISO 25178. The S-filter removes short-scale components. The L-filter removes long-scale components. The F-operator is the form removal operation. Scale-limited surfaces (an SF surface or an SL surface) are obtained after the respective filters or form removal operations have been applied. Areal parameters are then calculated on one of these surfaces but, contrary to profile parameters, they do not reflect the previous filter operation in their name [5].

The areal surface texture standard ISO 25178 consists of the following parts:

- Part 1: surface texture indications; specifies the rules for the indication of areal surface texture in technical product documentation (e.g. drawings, specifications, contracts, reports) by means of graphical symbols
- Part 2: terms, definitions and surface texture parameters
- Part 3: specification operators
- Part 6: classification of methods for measuring surface texture
- Part 70: material measures for the calibration of instruments
- Part 71: soft gauges - SDF file format
- Part 72: soft gauges - X3P file format
- Part 600: nominal characteristics of surface texture measuring instruments
- Part 601: nominal characteristics of contact (stylus) instruments
- Part 602: nominal characteristics of non-contact (confocal chromatic probe) instruments
- Part 603: nominal characteristics of non-contact (wave front interferometric microscope) instruments
- Part 604: nominal characteristics of non-contact (coherence scanning interferometry) instruments
- Part 605: nominal characteristics of non-contact (point autofocus profiling) instruments
- Part 606: nominal characteristics of non-contact (focus variation) instruments
- Part 607: nominal characteristics of non-contact (confocal) instruments
- Part 700: calibration of surface texture measuring instruments
- Part 701: calibration and measurement standards for contact (stylus) instruments


The first filters were implemented as physical high-pass filters, using resistors and capacitors soldered behind a selector. The initial aim was to avoid large signal variations, in order to draw the profile correctly on a thermal band of paper or to display a roughness average indication on a dial indicator. These RC filters were used for almost 30 years on all types of stylus profilometers. They date back to two traditional filtration systems that emerged in the 1950s [4]: the mean-line based system (M-system, Picture 3) and the envelope based system (E-system, Picture 4).

Picture 3. The mean-line system


The M-system generates a reference line passing through the measured profile, from which the roughness is assessed [4]. This reference line, shown in Picture 3, is called the mean line, because the profile portions above and below it are equal in the sum of their areas.
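The defining property of the mean line can be checked numerically. The short Python sketch below uses the arithmetic mean of a synthetic profile as the simplest possible reference line (an assumption for illustration only, not the filtered mean line used in practice) and verifies that the summed areas of the profile above and below it balance.

import numpy as np

# Synthetic profile: a waviness-like sine plus random roughness.
x = np.linspace(0.0, 4.0, 4001)
rng = np.random.default_rng(0)
z = 0.3 * np.sin(2 * np.pi * x / 1.3) + 0.05 * rng.normal(size=x.size)

mean_line = np.full_like(z, z.mean())   # simplest reference: the arithmetic mean
dev = z - mean_line
area_above = dev[dev > 0].sum()
area_below = -dev[dev < 0].sum()
print(abs(area_above - area_below) < 1e-6)   # True: the areas balance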

Other useful international standards for surface texture:
- ISO 1 (GPS) - Standard reference temperature for geometrical product specification and verification
- ISO 1101 (GPS) - Geometrical tolerancing - Tolerances of form, orientation, location and run-out
- ISO 8785 (GPS) - Surface imperfections - Terms, definitions and parameters
- ISO 14406 (GPS) - Extraction
- ISO 14253 (GPS) - Inspection by measurement of workpieces and measuring equipment - Part 1: Decision rules for proving conformance or non-conformance with specifications
- ISO 14638 (GPS) - Masterplan
- ISO/IEC Guide 98-1:2009 - Uncertainty of measurement - Part 1: Introduction to the expression of uncertainty in measurement
- ISO/IEC Guide 98-3:2008 - Uncertainty of measurement - Part 3: Guide to the expression of uncertainty in measurement (= GUM)
- ISO/IEC Guide 99:2007 - International vocabulary of metrology - Basic and general concepts and associated terms (= VIM)

Picture 4. The envelope system; A - Roughness; B - Waviness [4]

The E-system acts quite differently; it can be visualised as a large disk rolling across the profile from above, with the covering envelope formed by the rolling disk [4]. As shown in Picture 4, the E-system is based on simulating the contact of two mating surfaces, whereby the peak features of the surface play the dominant role in the interaction. The M-system and the E-system complement rather than compete with each other, and neither of them can fulfil all practical demands alone.


The M-system was greatly enriched by incorporating advanced mathematical theories. The Gaussian regression filter overcame the problems of end distortion and of the poor performance of the Gaussian filter in the presence of a significant form component.

3. FILTRATION TECHNIQUE

Filtration is one of the core elements of the analysis tools in geometrical metrology. It is the means by which the information of interest is extracted from the measured data for further analysis. Noise is removed by filters before fitting routines are applied to generate the geometry of the measurand [2].

The E-system also experienced significant improvements. With the introduction of mathematical morphology, morphological filters emerged as a superset of the early envelope filters, offering more tools and capabilities.


Filters can be classified in a certain hierarchy (Picture 5). Most of the filters used today in dimensional metrology belong to the class of linear filters [3]. In particular, the following filters are used:
- electrical RC filters, implemented in hardware,
- phase-correct 2RC filters, implemented in software,
- the Gaussian filter, implemented in software,
- the spline filter, implemented in software,
- the robust spline filter, implemented in software.

4.2. Spline filter

The spline filter has been developed to overcome the disadvantages of linear shift-invariant filters such as the Gaussian filter (Picture 7). Spline filters are still linear, phase-correct filters, but they are not shift-invariant. They are implemented in software only, using a very fast matrix algorithm. There exists a robust version of the spline filter which is insensitive to outliers [5].

Picture 5. Filters Classification

4. FILTRATION ACCORDING ISO 16610

Picture 7. Spline filter (green) vs. Gaussian filter (red) [6]

The following profile filters are published in the ISO 16610 series [1]:
- Gaussian filters (ISO 16610-21)
- Spline filters (ISO 16610-22)
- Spline wavelets (ISO 16610-29)
- Robust Gaussian regression filters (ISO 16610-31)
- Robust Spline filters (ISO 16610-32)
- Morphological filters (ISO 16610-41)

4.3. Wavelet filters


Wavelet filters are linear filters and can be used to remove noise or outliers [1] (Picture 8). Contrary to the Fourier transformation, the wavelet decomposition allows one not only to determine the wavelength content of a measured profile, but also to localize where a particular wavelength occurs.

4.1. Gaussian filter


The Gaussian filter (Picture 6) belongs to the class of linear shift-invariant filters. It can only be implemented in software, because the filter is non-causal. It is a phase-correct filter with a symmetrical weighting function [1]. The Gaussian filter has replaced the phase-correct 2RC filter.

The Gaussian filter has the following disadvantages [4]:
- the filter is a continuous filter, i.e. the implementation is arbitrary (there is no unique algorithm),
- the filter has end-effects, i.e. data at both ends of the filtered signal must be discarded,
- the filter has problems with signals which have a large curvature,
- an adjustment of the signal before filtering is necessary,
- finite periodic signals cannot be filtered, because of the end-effects,
- the filter is not robust, i.e. it is sensitive to outliers.
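To make the role of the weighting function concrete, the following minimal Python sketch separates a sampled profile into a waviness (mean line) and a roughness component using a Gaussian weighting function with the usual 50 % transmission constant; the cut-off, spacing and synthetic profile are illustrative assumptions, and the end-effects mentioned above are simply ignored here.

import numpy as np

def gaussian_profile_filter(z, dx, cutoff):
    # Split a sampled profile into waviness (mean line) and roughness using a
    # Gaussian weighting function; end-effects are ignored in this sketch.
    alpha = np.sqrt(np.log(2.0) / np.pi)              # 50 % transmission at the cut-off wavelength
    half = int(cutoff / dx)                           # truncate the weights at +/- one cut-off
    x = np.arange(-half, half + 1) * dx
    s = np.exp(-np.pi * (x / (alpha * cutoff)) ** 2) / (alpha * cutoff)
    s /= s.sum()                                      # normalise the discrete weights
    waviness = np.convolve(z, s, mode="same")         # low-pass part: mean line / waviness
    roughness = z - waviness                          # high-pass part: roughness
    return waviness, roughness

# Illustrative use on a synthetic 8 mm trace sampled every 1 um (assumed values).
x = np.arange(0.0, 8.0, 0.001)
z = 0.5 * np.sin(2 * np.pi * x / 2.5) + 0.05 * np.sin(2 * np.pi * x / 0.1)
w, r = gaussian_profile_filter(z, dx=0.001, cutoff=0.8)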

Picture 8. (a) Original profile with an outlier; (b) outlier removed by a wavelet filter [6]

The smooth part of a wavelet decomposition of a profile corresponds to a low-pass filter, while the detail part corresponds to a high-pass filter. The wavelet decomposition, like the Fourier transformation, can be reversed, which allows the construction of wavelet filters [5].
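The outlier removal illustrated in Picture 8 can be sketched as follows. The example assumes the PyWavelets library and a soft-thresholding rule on the detail coefficients, which is one common choice rather than the specific procedure of ISO 16610-29.

import numpy as np
import pywt

def remove_outliers_wavelet(profile, wavelet="db4", level=4, k=3.0):
    # Decompose the profile, soft-threshold the detail coefficients and reconstruct.
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    for i in range(1, len(coeffs)):
        sigma = np.median(np.abs(coeffs[i])) / 0.6745      # robust noise estimate per level
        coeffs[i] = pywt.threshold(coeffs[i], k * sigma, mode="soft")
    cleaned = pywt.waverec(coeffs, wavelet)
    return cleaned[:len(profile)]                          # waverec may return one extra sample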

Picture 6. The Gaussian filter (ISO 16610-21) [6]

4.4. Robust Spline filter

The robust spline filter is applied as a profile filter in roughness or form measurements (Picture 9). The filtered signal shows no unwanted deviation caused by deep holes or scratches in the surface (green line in Picture 9), as the Gaussian filter does (red line in Picture 9). The robust spline filter is also insensitive to outliers.

Dilation and erosion are not filters; they are just


morphological operations [5]. When dilation is followed
by erosion, it is called a morphological closing filter. If
the sequence is reversed, i.e. erosion followed by dilation,
it is called a morphological opening filter.


When applied in an alternating sequence, these two filters can be used to selectively eliminate features of any given "size" from the input data [3]. Closing and opening filters can also be cascaded to create alternating symmetrical filters.
Picture 9. The robust Spline filter (ISO 16610-32) [6]

5. CONCLUSION


This paper provides a brief overview of the standards and techniques that are relevant to surface texture. It describes roughness and waviness problems of profiles and surface areas, with a comprehensive reference to the relevant international standards, and offers metrology engineers a guideline for choosing the appropriate filter for various applications.

4.5. Morphological filters

Morphological filters can be interpreted as a simulation of the track of a reference point of a rigid solid body, for example the centre of a ball, which moves along the surface of a workpiece while remaining continuously in contact with the surface to be filtered [4]. One of the main applications of morphological operations is the morphological reconstruction of a tactilely measured profile. Morphological filters are non-linear filters.

References
[1] SRPS EN ISO 16610-1:2015; Institute for Standardization of Serbia.
[2] Tomov,M., Kuzinovski,M., Cichosz,P.: A New Parameter of Statistic Equality of Sampling Lengths in Surface Roughness Measurement, Strojniški vestnik - Journal of Mechanical Engineering, 59(2013)5, 339-348; DOI: 10.5545/sv-jme.2012.606
[3] Srinivasan,V., Scott,P.J., Krystek,M.: ISO standards for geometrical filters; Proceedings of the XVI IMEKO World Congress, Vienna, Austria, 2000, September 25-28
[4] Shan Lou, Wen-Han Zeng, Xiang-Qian Jiang, Paul J. Scott: Robust Filtration Techniques in Geometrical Metrology and Their Comparison; International Journal of Automation and Computing, 10(1), February 2013, 1-8; DOI: 10.1007/s11633-013-0690-4
[5] Blateyron,F.: Good practices for the use of areal filters; Conference proceedings, 3rd Seminar on Surface Metrology of the Americas, Albuquerque, New Mexico, May 2014; DOI: 10.13140/2.1.1007.9361
[6] Michael,K.: Filtration of data according to the new ISO 16610 series, CENAM 5th Simposio de Metrología, Querétaro, Mexico, 2008, online: http://www.cenam.mx/ammc/eventos/evento2008/cmu-mmc_2008_krystek.pdf, accessed 30.05.2016.

Picture 10. Morphological closing filter (upper red curve) obtained by rolling a disk over the profile [5]

Two morphological operations, called dilation and erosion, are used to define a mechanical surface. If the disk is rolled over the surface, the operation is called dilation; the path of its centre is recorded (red line, Picture 10). Note how this fills the "valleys" while preserving the "peaks" [4]. If the disk is rolled below the surface, the operation is called erosion (grey dotted line, Picture 11); the path of the disk envelope is recorded. Note how this removes the "peaks" and preserves the "valleys".
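A minimal Python sketch of a morphological closing filter with a disk-shaped structuring element is given below; it implements dilation followed by erosion directly, and the disk radius, spacing and synthetic profile are assumptions chosen only to illustrate how a narrow valley is bridged while the peaks are preserved.

import numpy as np

def _dilate(z, b):
    # Grey-scale dilation of profile z by structuring element b (rolling the disk above the profile).
    n, m = len(z), len(b)
    h = m // 2
    zp = np.pad(z, h, mode="edge")
    return np.array([np.max(zp[i:i + m] + b) for i in range(n)])

def _erode(z, b):
    # Grey-scale erosion of profile z by structuring element b.
    n, m = len(z), len(b)
    h = m // 2
    zp = np.pad(z, h, mode="edge")
    return np.array([np.min(zp[i:i + m] - b) for i in range(n)])

def morphological_closing(z, dx, radius):
    # Disk-shaped structuring element sampled at the profile spacing dx.
    k = int(radius / dx)
    xb = np.arange(-k, k + 1) * dx
    b = np.sqrt(np.maximum(radius ** 2 - xb ** 2, 0.0))
    return _erode(_dilate(z, b), b)        # closing = dilation followed by erosion

# Illustrative use on a synthetic profile with a deep, narrow valley (assumed values).
x = np.arange(0.0, 5.0, 0.005)
z = 0.2 * np.sin(2 * np.pi * x / 2.0)
z[480:520] -= 0.5                          # the narrow valley
closed = morphological_closing(z, dx=0.005, radius=0.25)   # the rolling disk bridges the valley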

Picture 11. Morphological opening filter (upper red curve) obtained by rolling a disk over the profile [5]


SYSTEM FOR REMOTE MONITORING AND CONTROL OF HF-OTH RADAR
BOJAN DOLI
Institute VLATACOM, Belgrade, Serbia, bojan.dzolic@vlatacom.com
DEJAN NIKOLI
Institute VLATACOM, Belgrade, Serbia, dejan.nikolic@vlatacom.com
NIKOLA TOSI
Institute VLATACOM, Belgrade, Serbia, nikola.tosic@vlatacom.com
NIKOLA LEKI
Institute VLATACOM, Belgrade, Serbia, nikola.lekic@vlatacom.com
VLADIMIR D. ORLI
Institute VLATACOM, Belgrade, Serbia, vladimir.orlic@vlatacom.com
BRANISLAV M. TODOROVI
Institute VLATACOM, Belgrade, Serbia, branislav.todorovic@vlatacom.com

Abstract: Radars in the high-frequency band (3-30 MHz) which use surface waves generate a large coverage area. This allows radars of this type to be used as part of a system for monitoring exclusive economic zones of up to 200 nm. In this case, the high-frequency radar is used as a sensor for monitoring the sea surface over the horizon. Remote monitoring and control of the Vlatacom High Frequency Over the Horizon Radar (HF-OTHR) is based on web technologies and is a solution for monitoring the system parameters and managing the system through a local area network and a wide area network. This paper describes the implemented system for remote monitoring and management of Vlatacom HF-OTH radars, which combines standard commercial equipment based on IP addressing with a specific web interface for the RF power amplifiers and the radar sensor.
Keywords: HF-OTHR, remote monitoring and control, web interface, web application, RF power amplifier.

1. INTRODUCTION
The Exclusive Economic Zone (EEZ) of 200 nm has been defined by the United Nations convention. The EEZ is a zone of specific width that stretches from territorial waters towards the open sea, in which countries have exclusive rights such as the exploitation of biological and mineral resources of the sea. Controlling this zone represents a technological, financial and organizational challenge [1,2].

HF-OTHR uses the frequency band from 3 to 30 MHz (HF). Surface-wave HF-OTHRs utilize vertically polarized surface electromagnetic waves, which propagate above the sea or ocean surface. The use of surface-wave HF-OTHR involves specific problems, such as: the influence of the sea state on electromagnetic wave propagation above the sea surface, the effect of the curvature of the Earth's surface on the characteristics of the transmitting and receiving antenna stations, interference from other transmitting devices, atmospheric, space and man-made noise in the vicinity of the HF-OTHR, and the radar cross section (RCS) of the targets that need to be detected and tracked [1,4,5].

An integrated monitoring system (IMS) uses multiple types of electronic sensors and communication devices which make monitoring activity in the EEZ possible. Since optical and microwave sensors have limited range, special mobile platforms, such as airplanes or satellites, could be used for surveillance beyond the horizon. This approach yields a high cost of the IMS. The second approach to complete EEZ surveillance is the use of HF-OTH radar networks. To be more precise, HF-OTHRs are often the sensor of choice for budget-friendly EEZ control, since their price is significantly lower than the price of the aforementioned sensors and their platforms. Vlatacom HF-OTHRs (vHF-OTHR) are currently in use all over the world, either as standalone radars or as part of an IMS [2,3]. Besides IMS, these radars may be used for other purposes such as oceanographic research or pollution detection.
These types of radars are designed for unmanned operation and do not require operators at the site. Apart from the data flow towards the Command and Control (C2) systems, vHF-OTHR control and parameter readings must be available from the C2. The data obtained from parameter readings optimize maintenance procedures and ultimately cut the exploitation cost of a vHF-OTHR, thus making vHF-OTHRs even more appealing for IMS purposes.


Picture 1. vHF-OTHR site



The solution for remote control and parameter readings implemented in the vHF-OTHR is the subject of this paper. The solution is based on commercially available components, which cuts the exploitation costs even further and eases maintenance procedures.

According to the configuration of the system, the shelters are quite distant from each other. It is necessary to build a local area network (LAN) for each radar system. Network parameters are defined so that Internet protocols (TCP/IP) can be used for each device within one LAN. According to the network parameters, a static IP (Internet Protocol) address, subnet mask and default gateway must be defined for each device. In order for all radar systems to communicate with each other, and to allow remote access to each device, it is necessary to connect all radar systems into a radar network, which is done through a router. Subnet numbers have to be assigned to all LANs so that the routers within the radar network can function properly. For this reason, when designing the radar network, it is necessary to make a clear IP plan which defines each device in the radar network; a sketch of such a plan is given below.
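As an illustration of such an IP plan, the following minimal Python sketch carves one /24 subnet per radar-site LAN out of a private address block and assigns hypothetical static addresses to the main devices; all address ranges and device roles are assumptions made for the example, not the actual plan of the deployed network.

import ipaddress

# Hypothetical addressing scheme: one /24 LAN per radar site, carved out of a private /16.
radar_network = ipaddress.ip_network("10.10.0.0/16")
site_lans = list(radar_network.subnets(new_prefix=24))

ip_plan = {}
for site_id, lan in enumerate(site_lans[:4], start=1):      # e.g. four radar sites
    hosts = list(lan.hosts())
    ip_plan[f"site{site_id}"] = {
        "subnet": str(lan),
        "default_gateway": str(hosts[0]),                   # router interface of the site LAN
        "radar_sensor": str(hosts[1]),
        "power_amplifiers": [str(h) for h in hosts[2:7]],
        "pdu": str(hosts[7]),
        "ups": str(hosts[8]),
    }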

2. SYSTEM DESCRIPTION
The Vlatacom High Frequency Over the Horizon radar is a shore-based remote sensing system using over-the-horizon radar technology to monitor and track vessels. Based on a commercial HF signal acquisition core, it easily expands its ship-tracking functionality with monitoring of ocean surface currents, waves and wind parameters.
The radar transmits RF power continuously; no gating or pulsing sequences are used, in order to provide the best signal-to-noise performance. The required de-coupling between transmitter and receiver has to be achieved by using separate locations for the Rx and Tx antennas. This results in a very typical site geometry with two separated antenna arrays, as sketched in Picture 1.

This addressing also covers all the devices described in the next chapter, so that all of them can be monitored and managed remotely.

The selected site geometry is of crucial importance: it defines the coverage area and, at the same time, the system's own robustness to self-interference, and it is directly determined by the adopted operating frequency and the selected output power.

2.1. Site organization

Each of the Rx and Tx segments of every vHF-OTHR system has its own shelter hosting the system hardware: dedicated and standard COTS equipment. The shelters are located close to the Rx/Tx antenna arrays, in order to provide the necessary spatial arrangement for cabling. The Tx and Rx shelters of one vHF-OTHR system are mutually connected, in order to provide a connection between the two segments of the system. The connection between the shelters is realized via a fibre-optic cable, as shown in Picture 2.

2.2. Site Shelters

All the equipment shelters have appropriate thermal insulation. The interior is separated into two spaces: a working area and an equipment area. A fire suppression system and an appropriate air conditioning system are installed in the shelters. Four 19" server racks are placed in every shelter, and space is left for one additional rack if needed for future expansion. Each rack has an installed Power Distribution Unit (PDU) which contains 16 outlets for supplying power to the equipment. A controller located within the PDU is connected via a UTP cable to the switch. Each PDU also contains a temperature sensor. This makes remote control and monitoring of the equipment installed in the racks possible.

Rx shelters, as shown in Picture 3, have a front desk in the working area. The following equipment is placed in the server racks:
- a radar sensor with a frequency-controlling unit, two receiving units and a power supply unit,
- two redundant interface PCs (backup and main),
- two network switches,
- a redundant UPS system.

The Rx shelter is further connected with a back-side shelter/station for connection with other parts of the overall surveillance system (data exchange with the Command and Control centres at the appropriate level of the hierarchy), along with the provision of the required power from the power infrastructure network.

Picture 2. Site organization

Picture 3. Equipment shelter

The Tx shelter is without a front desk in the working area. It has the following equipment placed in the server racks:
- five power amplifiers (four are in use, and one will be used as a spare),
- a power splitter,
- two network switches (backup and main),
- a control laptop,
- a redundant UPS system.

Power to the Rx and Tx locations is provided from a central control site location. Power for the Tx equipment shelter is provided through the Rx equipment shelter. All processed data are also transferred from the Rx equipment shelter to the central control site, which is connected with the main control centre via a satellite or some other type of connection. The Rx and Tx equipment shelters also have a communication link through a single-mode fibre cable, used for the control signals and for transferring the modulated signal.

3. TECHNICAL SOLUTION FOR REMOTE MONITORING AND CONTROL

For the described system, a large number of different components and individual subsystems are controlled by a specially designed and implemented software application.

For remote control and monitoring of the vHF-OTH radar, a web application has been designed that combines:
- standard equipment based on the IP principle,
- a specially developed web interface for the power amplifier,
- a specially developed application for the control of the radar equipment.

Each application is made on the web platform, and it is necessary to use one of the existing web browsers, such as Internet Explorer, Mozilla Firefox, Opera, etc.
By opening a browser and entering the IP address of the device, the page with the web application opens automatically. By typing a user name and password, full access to the device is obtained.

3.1. Standard equipment based on the IP principle

Standard equipment (described in the previous section for the various segments of the system) is included in the monitoring and control system by using regular methods of IP addressing. In this way, the current status of the equipment is monitored and elementary control functions are implemented. The corresponding methods are implemented within the dedicated control software. There is also a web application for monitoring and controlling the PDU and the UPS (Uninterruptible Power Supply). From these applications it is also possible to control the power supply of certain components and to read their status. The application is shown in Picture 4.
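As a minimal illustration of this kind of access, the sketch below polls a status page of a device's web interface over HTTPS with basic authentication, using the Python requests library; the URL path, credentials and the use of a self-signed certificate are assumptions made for the example, since each device exposes its own specific interface.

import requests

def poll_device_status(ip, user, password, timeout=5):
    # Fetch a status page from a device's embedded web interface (placeholder path).
    url = f"https://{ip}/status"
    # Embedded web UIs often use self-signed certificates; verification is disabled
    # here only to keep the sketch short.
    response = requests.get(url, auth=(user, password), timeout=timeout, verify=False)
    response.raise_for_status()
    return response.text

# Hypothetical usage for one PDU on the site LAN.
# print(poll_device_status("10.10.1.8", "operator", "secret"))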

Picture 4. Web application for the PDU

3.2. Web interface for the power amplifier

The interface of the amplifier is specifically designed as a web application for the purposes of the described vHF-OTHR product. A big problem for an HF radar is the large fluctuation of external noise. To optimize the complete operation of the radar system, a web application for remote access has been designed, shown in Picture 5. This application provides features such as:
- remote control and monitoring of the power amplifier: load status, reading of the active alarms, the direct power, the reflected power, VSWR, etc.,
- changing of the output power levels.
The last option is very useful, because simply changing the output power level can optimize the entire radar system.

An important aspect of the operation of this RF power amplifier is the power level of the input signal. The power level at the output of the driver of the radar sensor is quite high, and it has to be reduced so that the amplifier does not go into saturation. For that reason, a sub-application was designed at the Vlatacom Institute which makes it possible to remotely decrease the output level of the sensor to the value that is optimal for the regular operation of the RF power amplifier.

Picture 5. Web application for the power amplifier

3.3. Web application for control of radar equipment

The application is made on the web platform. Through this application the user can control all parameters of the sensor. It is possible to change the acquisition mode, working frequency, bandwidth, chirp length, etc. It is a very powerful application because it gives the possibility to optimize the sensor.

The web application for control of the radar sensor is shown in Picture 6. Through this application it is possible to control the entire system, read the status of the sensor components, perform certain settings and calibration of the system, and control and monitor the power of the radar sensor, etc.

Picture 6. Web application for the radar sensor

The described software subsystems can also be accessed through network protocols such as Telnet and SSH. In this case, the required information is obtained by entering the appropriate CLI (Command Line Interface) commands.


It is possible to access the local computers and servers through protocols such as Remote Desktop, VNC, TeamViewer, etc.

The described system for remote monitoring and control of the vHF-OTHR provides continuous access to the network, monitoring of the current status, intervention in the event of a fault, and complete control and optimization of the system.

The described software for monitoring and control of the vHF-OTHR has its natural place within the main control system, in a hierarchically organized coast-control system at the national level. Additional protection of the communication between the C2 and the local site is provided through a Virtual Private Network (VPN). It is possible to provide remote access via VPN from a physically independent remote location to the C2 or the local site, if a network interface is available. Each C2 also provides specific access rights for officers, operators and other authorized users. All of this is possible due to the previously adopted concept.

References
[1] Sevgi,L.D., Ponsford,A.M., Chan,H.C., An integrated
maritime surveillance system based on high-frequency
surface-wave radars, Part 1: Theoretical background
and numerical simulations, IEEE Antennas and
Propagation Magazine, 43(4) (2001) 28-43.
[2] Sevgi,L.D., Ponsford,A.M., Chan,H. C., An integrated
maritime surveillance system based on high-frequency
surface-wave radars, Part 2: Operational status and
system performance, IEEE Antennas and Propagation
Magazine, 43(5) (2001) 52-63.
[3] Anderson,S.J.: Optimizing HF Radar Siting for Surveillance and Remote Sensing in the Strait of Malacca, IEEE Transactions on Geoscience and Remote Sensing, 51(3) (2013) 1805-1816.
[4] Fabrizio,G., High Frequency Over-the-Horizon Radar:
Fundamental Principles, Signal Processing, and
Practical Applications, McGraw-Hill, inc., 2013.
[5] Lekic,N., Nikolic,D., Milanovic.B., Vucicevic,D.,
Valjarevic,A., Todorovic,B.,M.: Impact of radar cross
section on HF radar surveillance area: Simulation
approach, IEEE Radar Conference Johannesburg,
(2015) 369-373

4. CONCLUSION
vHF-OTH radars are used as sensors that allow monitoring of the Exclusive Economic Zone (EEZ). As these radars operate unmanned, it is necessary to ensure continuous monitoring of the state of the radar and the possibility of remote management.
Remote control of the power distribution system (UPS and PDU) is based on commercially available equipment controlled over the LAN. Temperature and humidity sensors provide insight into the current microclimatic environment, which has a very positive effect during exploitation.
Output power level control of the RF power amplifier can be organized remotely, along with reflected power level monitoring and switching functions, through the web interface.
A dedicated web interface for the HF radar sensor provides control of the basic radar functions, such as the carrier frequency, signal bandwidth and data rate. Since HF interference is constantly scanned, any changes can be addressed in a timely manner.

MODEL OF IMPROVING MAINTENANCE OF TELECOMMUNICATION DEVICES
VOJKAN RADONJI
Tehniki remontni zavod aak aak, e-mail: vojkan.r69@gmail.com
MILENKO IRI
Tehniki remontni zavod aak aak, e-mail: ciric.milenko4@gmail.com
BRANKO RESIMI
Tehniki remontni zavod aak aak, e-mail: resimicbranko@gmail.com
IVAN MILOJEVI
Uprava za logistiku GVS Beograd, e-mail: ivanmiloj@yahoo.com

Abstract: The conducted research determines the maintenance technology model based on the structural and technical
characteristics of the device. There are several variants of maintenance organization, based on such technology model. The
assessment criteria of the presumed variants of maintenance organization have been determined. The optimum variant has
been selected by means of the PROMETHEE method. The selected option has a maximum criterion of readiness with
minimal maintenance costs, thus achieving the optimal maintenance organization.
The determined technology model and the optimal maintenance organization make the maintenance system of this type of
radio-relay devices efficient.
Keywords: radio-relay device, technology, maintenance, PROMETHEE Method.

1. INTRODUCTION
In order to achieve the optimal organization of the radio-relay devices' maintenance, the technology and the
organization of devices maintenance, as elements of the
system maintenance, have to be analyzed simultaneously.
The model of maintenance technology in the conducted
research was based on the structural characteristics of this
type of devices. It is assumed that preventive and, in part,
corrective maintenance would be implemented on the
device. The manner of implementation of preventive and
corrective maintenance, determines the technology model.

The objective of the research is to determine the model of


maintenance technology and select the optimal organization
of digital radio-relay devices maintenance. The purpose of
the devices is the transmission of voice and data. Several
such territorially distributed devices make the network of
radio-relay devices. Voice and data transmission is realized
through the radio-relay link and a protected computer
network. Precisely this way of interpersonal communication
is used to determine the technology model, which would
improve the current method of the maintenance of this type
of devices.

In accordance with the technology model determined in the conducted research, several variants of maintenance organization will be presumed for a specific territorial distribution of the systems within which the radio-relay devices operate. The multi-criteria method PROMETHEE (Preference Ranking Organization METHod for Enrichment Evaluations) will be the means of selecting the optimum variant of maintenance organization for this type of devices and the systems within which they operate.

Different forms of maintenance technology can be defined,


which leads to a multi-criteria problem that needs to be
solved.
Contemporary radio-relay devices have many advantages in technical and structural aspects. Exactly those advantages
will be used to improve the aspect of maintenance function
in the conducted research. The advantages are reflected in a
greater transmission capacity, fast data transmission and
availability of data in real time.

The research, based on the determined technology model and the selected optimum maintenance organization variant, makes the maintenance of this type of devices OPTIMAL. Optimal maintenance is determined on the basis of two main criteria: readiness and maintenance costs. In the particular maintenance model, which is a common example in practice, the research has reached the following result: the maximum readiness of devices with minimal maintenance costs. In this manner, the two opposing criteria have been reconciled, which makes this variant of maintenance organization the OPTIMAL MAINTENANCE and makes the maintenance system efficient.


The objective of the determined model of maintenance technology is to access the MIB database of the device via the radio-relay link or a protected computer network. The downloaded data refer to the results of the specified forms of device testing and the monitored and system parameters of the device. Obtaining the parameters of device self-testing is based on the operation of the device's operating system (Fig. 1).

2. DETERMINING THE MODEL OF MAINTENANCE TECHNOLOGY

The technology model is determined on the basis of the structural and technical characteristics of the device. The digital device has been designed with modern electronics, telecommunications and informatics technology. It consists of several types of modules whose work is managed by a central processing unit (CPU). The CPU implements several types of tests (TEST mode) that are used for the diagnostics of the device and the localization (defectation) of faulty modules within the device. The following parameters, which are the product of the device's operation (i.e. monitored parameters), are available in the MIB (Management Information Base) [1]:


- the results of testing via some of the system tests for correctness assessment (defectation),
- the level of the received signal,
- the probability of error in the received signal, BER (bit error rate),
- the level of the transmitted signal,
- the ratio between the active and the reflected power,
- the module power supply voltage,
- the device power supply voltage.


Figure 1. Schematic diagram of access to radio-relay device


The availability of the mentioned data allows the qualified person responsible for maintenance to have insight into the correctness of the device and into the parameters necessary for a decision on further required maintenance actions. The transfer of current data via the radio-relay link is achieved by communication between two radio-relay devices. SNMP (Simple Network Management Protocol) over IP (Internet Protocol) was implemented in the conducted research for the purposes of monitoring and managing the operation of the device. The communication between the computer and the device was achieved using SNMP agent and SNMP manager software.
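As an illustration of the kind of MIB access described above, the following minimal Python sketch reads a single value from a device's MIB over SNMP; it assumes the pysnmp library and an SNMP v2c community string (the paper's own application was built with VB.NET and IP Works), and the OID shown is a placeholder rather than an actual object of the radio-relay device's MIB.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def read_mib_value(ip, oid, community="public"):
    # Read a single MIB object from the device over SNMP v2c.
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),      # SNMP v2c
        UdpTransportTarget((ip, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(oid)),
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    return var_binds[0][1]

# Hypothetical usage: the OID below is a placeholder, not an actual object of the device's MIB.
# rx_level = read_mib_value("192.168.1.10", "1.3.6.1.4.1.99999.1.2.0")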

In addition, system parameters are available as well:
- transmitting frequency,
- receiving frequency,
- type of modulation,
- bit rate,
- network addresses LINK and DROP, etc.


Preventive maintenance of this sophisticated device is implemented twice within a one-year period. The goal of preventive maintenance is to determine the functional correctness of the device. In accordance with the maintenance concept for these types of devices, functional correctness is determined by running the TEST mode named ''FIXED PARAMETER TEST''. The test results are kept in the MIB base and are available for further use as one of the forms of maintenance.

The determined technology model was applied in the research of the optimal maintenance organization of digital radio-relay devices. It is assumed that preventive maintenance is realized on the basis of data obtained from the device's MIB database, without the need for the qualified person's presence at the site of the device. This speeds up the implementation of preventive maintenance, which directly contributes to a high value of the readiness parameter. Also, given that there is no movement of qualified personnel or devices, all forms of costs are significantly reduced, thus minimizing the maintenance costs criterion. Corrective maintenance is implemented in the same manner.

In addition, preventive maintenance is conducted within a 5


year period, or 18 000 hours of work, by means of testing
the device by special measuring equipment (TEST
STATION) [2]. This equipment provides a detailed
overview of all the important parameters of the module and
the device itself, on the basis of which further actions in the
field of maintenance can be taken.

3. IMPLEMENTATION OF APPLICATIONS
The application for reading the MIB database, after the device has been accessed via the SNMP protocol, was developed in a standard general-purpose language, VB.NET. Tools for communicating with the SNMP protocol are also provided. In the conducted research, the communication with the protocol was achieved via the ActiveX (agent) component IP Works S/SNMP V8.

If the implementation of preventive maintenance reveals a functional failure, corrective maintenance is implemented. The initial phase of corrective maintenance consists of the following forms of testing the device by means of the TEST mode: FIXED PARAMETER TEST, LOOP SELECTION (RF LOOP, BB2 LOOP, BB1 LOOP and LOOP BACK) and FRONT PANEL TEST. The obtained test results are stored in the MIB database and closely localize a faulty module, which is followed by the faulty module replacement and retesting.

The type of communication (link or computer network) and the device to be accessed for the purpose of obtaining the test results and monitored parameters of the device are selected through the application shown in Fig. 2. Applications for setting and reading the parameters before and after testing are implemented in a similar way.


Figure 2. The selection of communication type and the device in radio-relay network
Based on all the available data obtained through the radio-relay link or a protected computer network, one can make high-quality decisions about the functional correctness of the device. It also enables the implementation of preventive and corrective maintenance actions.

4. PRESUMED MAINTENANCE MODEL


Radio-relay devices operate within radio-relay systems in both stationary and mobile versions. Accordingly, the research presumed the maintenance model according to the territorial distribution shown in Fig. 3. The necessary mathematical calculations will be carried out for the presumed distribution of the radio-relay systems shown in the figure, in order to solve the multi-criteria problem. The solution of the multi-criteria problem will ensure the optimal maintenance organization for this type of devices.

Figure 3. Presumed maintenance model


It is assumed that there are 3 operating devices at each
location. Also, the average operating time is 16 hours a day
per device. The observation period was 10 years.

5. PRESUMED VARIANTS OF
MAINTENANCE ORGANIZATION

Codes L1, L2, L3, L4, L5 and L6 are territorial locations in


which the radio-relay systems are housed. Also, the location
L1 represents the headquarters of professional bodies in
charge of the implementation of preventive and corrective
maintenance of devices.

The research presumed maintenance organization variants


(alternatives) in accordance with determined technology
model, according to Table 1.

Table 1. Presumed variants in accordance with the determined maintenance technology

Designation 1, Variant V1: In accordance with the determined model of the maintenance technology, devices from each of the locations (L2-L6) need to be brought to location L1 in order to implement preventive maintenance. Technical inspection by the qualified person is performed at location L1. If a device is found to be defective, corrective maintenance actions must be implemented.

Designation 2, Variant V2: In accordance with the determined model of the maintenance technology, a qualified person goes to each of the locations and performs a technical inspection of the device. If a device is found to be defective, corrective maintenance must be implemented.

Designation 3, Variant V3: In accordance with the determined model of the maintenance technology, data regarding the functional correctness of the device are available to the qualified person stationed at location L1, through the radio-relay link or a computer network. Based on the available data from the MIB database, the qualified person implements further preventive maintenance actions. If a device is found to be defective, corrective maintenance actions are implemented. Corrective maintenance implies additional device testing, in accordance with the determined technology model, in order to locate the faulty module and apply corrective maintenance at the module level.


6. DEFINING CRITERIA FOR VARIANTS EVALUATION

The objective is to assess (evaluate) the presumed variants of maintenance organization as realistically as possible and thereby create conditions for the best selection.

For the purposes of a mutual comparison of the proposed variants of the devices' maintenance organization, the following criteria will be used [3,4]:
- readiness G - K1,
- average maintenance time tO (h) - K2,
- maintenance costs CC (din.) - K3,
- quality of work - K4.
The values of the criteria are determined for each proposed variant of maintenance organization, in accordance with the presumed maintenance model from Figure 3.

7. MATHEMATICAL MODEL IMPLEMENTATION

The selection of the optimum variant of maintenance organization, which optimizes the maintenance of the device, was conducted using the multi-criteria analysis method PROMETHEE. The implementation of PROMETHEE methods in solving multi-criteria problems is described in the literature [5,6,7,8,9]. The implementation of the mathematical model of selecting the optimum maintenance variant begins by defining the problem, as shown in Fig. 4.

Figure 4. Basic parameters in PROMETHEE II method


In the conducted research, based on the presumed maintenance model from Figure 3, each criterion for the assessment of the maintenance variants had to be calculated.

The readiness criterion G (K1) is calculated in the following way:

G = tR / (tR + tO),    (1)

where tR is the working time of a device at each of the locations Li during the observed period,

tR = tR1 + tR2 + ... + tR6,    (2)

and tO is the mean time to repair at each of the locations Li,

tO = tO1 + tO2 + ... + tO6.    (3)

The mean time to repair tO (K2) is calculated in the following way:

tO = tor + tt + tod + ttj,    (4)

where:
tor - the organizational time necessary to report the state "to repair" to the relevant workshop that executes this action,
tt - the time necessary to transport the device to the workshop in order to undertake the maintenance procedures,
tod - the duration of the maintenance procedures,
ttj - the time necessary to return the device from the workshop to the user.

The other criteria (K3 and K4) are calculated for the conditions of implementation of preventive and corrective maintenance with respect to the territorial distribution of the devices. The obtained results of the criteria calculations for the individual variants are shown in Table 2.

The values of the input and output streams of the presumed maintenance variants (actions) have been calculated [10,11,12]. The obtained result is shown in Table 3.
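The complete PROMETHEE II ranking behind Tables 3 and 4 can be reproduced in outline with a short script. The sketch below is a minimal Python implementation of the net-flow calculation, using the criteria values, weights and the linear preference thresholds from Table 2; the K4 (quality of work) criterion is left out because Table 2 does not list numerical values for it, so the flows differ from the paper's exact figures, although V3 again obtains the largest net flow.

import numpy as np

def linear_pref(d, q, p):
    # "Linear" preference function with indifference threshold q and preference threshold p.
    if d <= q:
        return 0.0
    if d >= p:
        return 1.0
    return (d - q) / (p - q)

def promethee2(values, weights, maximize, pref_funcs):
    # Net outranking flows for the alternatives given as rows of `values`.
    n = len(values)
    pi = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            total = 0.0
            for j, (w, mx, pf) in enumerate(zip(weights, maximize, pref_funcs)):
                d = values[a][j] - values[b][j]
                if not mx:
                    d = -d                     # minimization criterion: reverse the difference
                total += w * pf(d)
            pi[a, b] = total
    phi_plus = pi.sum(axis=1) / (n - 1)        # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)       # negative (entering) flow
    return phi_plus - phi_minus                # net flow used for the complete ranking

# Criteria K1 (readiness, max), K2 (mean maintenance time, min) and K3 (costs, min) from Table 2.
values = [
    [0.906, 115418.0, 3080000.0],   # V1
    [0.985,  17062.0, 1680000.0],   # V2
    [0.993,   8011.0,  280000.0],   # V3
]
weights = [0.30, 0.20, 0.30]
maximize = [True, False, False]
pref_funcs = [
    lambda d: linear_pref(d, 0.036, 0.094),
    lambda d: linear_pref(d, 44386.2, 115990.0),
    lambda d: linear_pref(d, 659966.3, 2526633.0),
]
print(promethee2(values, weights, maximize, pref_funcs))   # V3 obtains the largest net flow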


Table 2. Criteria calculations for presumed variants

Variants / Criteria    K1 (max)    K2 (min)        K3 (min)        K4 (max)
Variant V1             0,906       115 418,00      3 080 000,00
Variant V2             0,985       17 062,00       1 680 000,00
Variant V3             0,993       8 011,00        280 000,00
Weights                0,30        0,20            0,30            0,20
Preference type        "Linear"    "Linear"        "Linear"        "Regular"
"m" (indifference)     0,036       44 386,2        659 966,30
"n" (preference)       0,094       115 990,00      2 526 633,0

On the basis of the difference between the input and output streams of the actions (T+, T-), the values of the clean (net) streams (T) and the ranking of the variants of maintenance organization are shown in Table 4.

Table 3. Values of the input and output streams of actions

              Variant V1   Variant V2   Variant V3   T+
Variant V1        -          0,42         0,00       0,21
Variant V2      0,426         -           0,264      0,345
Variant V3      0,32         0,379         -         0,35
T-              0,373        0,4          0,132

After ranking the variants of maintenance organization by means of the PROMETHEE II method, and on the basis of the presumed criteria, the optimum variant of maintenance organization is V3. The decision maker in the research put emphasis, through the weight coefficients, on the criteria of readiness (K1) and maintenance costs (K3), and evaluated the remaining two criteria (K2 and K4) with the same weight coefficients. On the basis of the mathematical calculations and the results obtained in Table 4, the best-ranked is the third variant of maintenance organization (Fig. 5).

Table 4. Values of clean streams

              T+       T-       T        RANK
Variant V1    0,21     0,373    -0,163    3
Variant V2    0,345    0,4      -0,055    2
Variant V3    0,35     0,132     0,219    1

Figure 5. Ranking of maintenance organization variants

By applying the software implementation of the PROMETHEE method, the same results were obtained; they are presented in Table 5 and graphically in Figure 6 [13].

The results of solving the multi-criteria problem of selecting the optimal variant of the radio-relay devices' maintenance organization, obtained mathematically, indicate that the third variant of maintenance organization (V3) has the optimum values of the focal criteria: the highest value of the readiness criterion (G = 0,993) and the lowest maintenance costs (CC = 280 000,00 din). After the completed calculation, the value of the clean (net) stream is the largest for this maintenance variant (T = 0,219).

The conclusion is that the determined model of digital radio-relay devices' maintenance technology via the radio-relay link and the computer network (points 3 and 4), which was presented by variant V3 in the conducted research, makes the maintenance organization of this type of devices optimal. The optimality is reflected in the rapid and reliable implementation of planned surveys of the device's parameters, on the basis of which the decision about the functional correctness is made.

Table 5. Values of clean streams and maintenance variants evaluation

RANK   VARIANT       Phi      Phi+ (input stream)   Phi- (output stream)
1      III variant   0,3294   0,4294                0,1000
2      II variant    0,0866   0,2460                0,1595
3      I variant     -0,416   0,2000                0,6160



Figure 6. Results of variants ranking by means of


software

10. CONCLUSION
Complete ranking of alternatives (variants of maintenance
organization) by means of PROMETHEE II method showed
that III variant (V3) of maintenance organization is best
ranked, and successively II (V2) and in the end I variant
(V1) of maintenance organization.
Based on the results obtained by mathematical and software
means, it can be concluded that the determined maintenance
technology is justified. Namely, at III variant (V3) the
maintenance technology was determined on basis of
structural and technical characteristics of the device. It is
based on obtaining the necessary maintenance parameters
via radio-relay link or computer network. This maximizes
the value of readiness criterion, while reducing the
maintenance costs. Together, they contribute to making the
maintenance of this kind of devices optimal.
Solving the problem of selecting the optimal maintenance
organization in accordance with the determined model of
maintenance technology, has been to the scientific
framework. The variant which makes the devices
maintenance optimal was selected by means of multi-criteria
analysis method, PROMETHEE II.

11. References
[1] Opricovic,S.: Multi-criteria optimization, Scientific
books, Belgrade, 1986.
[2] Pesic,Z.: Technology maintenance of motor vehicles.
Military Publishing House, Belgrade, 2009.
[3] Nikolic,I., Borovic,S.: Multi-criteria optimization

720

USAGE OF INFRARED THERMOGRAPHY IN THE PROCESS OF

CONDITION-BASED MAINTENANCE OF SHIP SYSTEMS
VESELIN MRDAK
Technical Test Center, Belgrade, mrdak.v@ikomline.net

Abstract: This article focuses on advances in infrared thermography as a non-contact and non-invasive condition monitoring tool in general, and especially in the process of condition-based maintenance of ship systems. Infrared thermography has become a widely accepted condition monitoring tool for various technical subjects. The use of infrared thermography for condition monitoring of ship systems has been accepted by Lloyd's Register as an authorized diagnostic technique. Accredited companies conduct thermographic measurements of ship systems for the purposes of condition-based maintenance. The Technical Test Center (TTC) has begun to use thermographic screening of technical systems for condition-based maintenance. As part of the diagnostic measurements on the military ship BPN-30, TTC carried out IRT screening of the ship systems. Experiences from this first use of IRT for diagnostic measurements of ship systems are also discussed in this article.
Keywords: infrared thermography, condition-based maintenance, diagnostic measures, ship systems
1. PREFACE
The purpose and aim of diagnostic measurement in the ship and ship systems maintenance process is to achieve the highest level of operational safety of marine systems at the lowest cost. This paper presents the usage of infrared thermography in the process of condition-based maintenance of ship systems.

Modern methods and principles of maintenance are based upon the development and installation of diagnostic software agents (intelligent information and diagnostic systems). One of these modern diagnostic techniques is infrared (IR) thermography, a non-contact and non-invasive condition monitoring tool. The use of IR thermography in marine applications is described in a few articles mentioned in this paper.

2. GENERAL OVERVIEW OF IR THERMOGRAPHY
The origin of IR thermography in marine applications owes much to a Lloyd's Register prediction: "In the near future, mechanical machinery onboard vessels will also benefit from thermal imaging, especially as a pre-docking strategy to identify and target equipment and systems which need attention as well as to eliminate unnecessary work." This Lloyd's Register prediction was issued in 2004.

The main advantages of IRT in the process of diagnostic measurement for ship and ship systems maintenance are:
- IRT is a remote, non-contact and non-invasive technique;
- response speed of measuring: the measured energy travels from the target to the sensor at the speed of light, so the response of the instrument can be in milliseconds or even microseconds;
- recording of dynamic variations of temperature in real time;
- minimal instrumentation (an infrared camera, a tripod or camera stand and a video output unit for displaying the acquired infrared thermal images);
- it provides a visual picture (thermal image) of the entire component or machinery;
- the approved axiom that monitoring the temperature of machinery or a process is undoubtedly one of the best predictive maintenance methodologies;
- the possibility of access to the main ship systems and of getting a thermal image of the entire component or machinery;
- the possibility of intelligent image analysis and of automated fault detection and localization;
- the possibility of combining it with other diagnostic measurement techniques such as vibration monitoring, shock pulse monitoring, noise and leak detection, thickness measurement monitoring and crack detection, and so on.

The disadvantages are not numerous, and they are the following:
- the impossibility of very accurate temperature measurements of the entire component or machinery;
- the requisite for well trained, qualified and assessed personnel, in accordance with ISO 18436-7;
- the limitation that radiometric sensing is susceptible to unacceptable error when used on most low emissivity surfaces.

The theory of IRT rests upon the following laws:

Lλ = c1 / { λ^5 [ exp( c2 / (λ T) ) - 1 ] }   - Planck's law

q / A = σ T^4   - Stefan-Boltzmann's law

λmax T = 2897.7 μm K   - Wien's displacement law

where:
λ - the wavelength of the radiation (μm)
Lλ - the power radiated by the blackbody per unit surface and per unit solid angle for a particular wavelength (W m-2 μm-1 sr-1)
T - the temperature on the absolute scale (K)
c1 and c2 - the first and second radiation constants
q - the rate of energy emission (W)
A - the area of the emitting surface (m2)
σ - the Stefan-Boltzmann constant (σ = 5.676 x 10-8 W m-2 K-4)
ε - the emissivity of the emitting surface for a fixed wavelength and absolute temperature T

The theory of usage of infrared thermography in the process of condition-based maintenance of technical systems, in short: all objects with a temperature above 0 K (-273°C) emit electromagnetic radiation in the infrared region of the electromagnetic spectrum. Infrared radiation (wavelength in the range of 0.75-1000 μm) is positioned between the microwave and visible parts of the electromagnetic spectrum. This vast range can be further subdivided into near infrared or NIR (0.76-1.5 μm), medium infrared or MIR (1.5-5.6 μm) and far infrared or FIR (5.6-1000 μm) [1].

In order to use thermography in the process of condition-based maintenance of a technical system, it is necessary to know the principle of operation and the general characteristics of an infrared camera. The invention and production of newer generations of infrared cameras have enabled the temperature of a body to be measured remotely and provide a thermal image of the entire component or machinery. The article "Infrared thermography for condition monitoring - a review" presents the development of infrared cameras. Infrared cameras have undergone several modifications during the last few decades. The first generation cameras consisted of a single element detector and two (one horizontal and one vertical) scanning mirrors. In the more advanced second-generation cameras, two similar scanning mirrors along with array detectors (a large linear array or a small two-dimensional array) were used. The modern third generation cameras are without mirrors and have large two-dimensional array detectors (popularly known as focal plane arrays: FPA) [48,49]. Several on-chip image enhancement techniques like time delay integration are also implemented in these modern cameras, which increase the resolution and sensitivity of the systems. The old technology systems have lower spatial resolution, higher noise levels, smaller dynamic range, limited data storage capabilities and no onboard image processing [1].

Parameters that must be considered before choosing an infrared camera are:
- Spectral range is the portion of the infrared spectrum in which the infrared camera will be operationally active.
- Spatial resolution is the ability of the camera to distinguish between two objects within the field of view.
- Temperature resolution is the smallest difference in temperature in the field of view which can be measured by the infrared camera.
- Temperature range is the maximum and minimum temperature values which can be measured using an infrared camera.
- Frame rate is the number of frames acquired by an infrared camera per second.

Most IRT cameras today come with their own analysis software and have the capability to prepare the inspection report, including commercial stand-alone software. However, despite the multiple functions and ease of use of the software, the evaluation process is very time consuming when the analysis and report preparation are done manually, even by qualified or experienced personnel [4].
Picture 1. Uncooled optical-readable IR imaging system:


(a) schematic diagram and (b) components of the thermal
imager [5]

3. APPLICATIONS OF IRT FOR DIAGNOSTIC MEASUREMENT IN THE PROCESS OF CONDITION-BASED MAINTENANCE
Applications of IRT in the process of condition-based maintenance are in fact numerous. The article [1] describes in detail applications of IRT in condition-based processes in various technical fields. IRT has been successfully utilized for several condition monitoring applications such as civil structures, inspection of electrical equipment, monitoring of plastic deformations, inspection of tensile deformation, evaluation of fatigue damage in materials, inspection of machinery, weld inspection, monitoring of electronic printed circuit boards (PCBs) and evaluation of the chemical vapor deposition process. IRT has also been utilized in the nuclear, aerospace, food, wood and paper industries, and for high-level current density identification over planar microwave circuit sectors. The basics of IRT methodology, the operating principle of the infrared camera and applications of IRT in building envelope inspection, roof
inspection, electrical and mechanical inspection, detection of buried objects, surveillance, process control and condition monitoring of power distribution systems have been discussed in detail by Holst. Applications of IRT for condition monitoring purposes in various fields like nuclear, electrical, PCBs, aerospace, civil, etc. are also described by Reeves and Maldague. The origin of IRT, the development of infrared detectors, the perspective of IRT applied to building science, and the application of IRT to thermofluid dynamics and combustion systems are well described in a recent book edited by Meola [1]. For the purpose of this article, it is useful to explain the modes of usage of IRT for diagnostic measurement.
The standard ISO 18434-1:2008(E) Condition monitoring and diagnostics of machines - Thermography defines the requirements for thermographic measurements of a component or machinery. This standard:
introduces the terminology of IRT as it pertains to
condition monitoring and diagnostics of machines;
describes the types of IRT procedures and their merits;
provides guidance on establishing severity assessment
criteria for anomalies identified by IRT;
outlines methods and requirements for carrying out IRT
of machines, including safety recommendations;
provides information on data interpretation, and
assessment criteria and reporting requirements;
provides procedures for determining and compensating
for reflected apparent temperature, emissivity and
attenuating media [8].
This part of ISO 18434 also encompasses the testing
procedures for determining and compensating for reflected
apparent temperature, emissivity and attenuating media
when measuring the surface temperature of a target with a
quantitative IRT camera.
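The compensation idea behind such procedures can be illustrated with a simplified single-band radiometric model. The sketch below is not the ISO 18434 procedure itself, but a common grey-body approximation in which the camera signal is split into emitted and reflected parts; atmospheric attenuation is neglected and all names and values are illustrative.

# Simplified grey-body compensation sketch (not the ISO 18434 procedure itself):
# the total radiation seen by the camera is eps*W_obj + (1 - eps)*W_refl, so the
# object temperature can be recovered from the apparent blackbody temperature.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def object_temperature(t_apparent_k, emissivity, t_reflected_k):
    """Estimated surface temperature (K) from the apparent temperature reading."""
    w_total = SIGMA * t_apparent_k ** 4
    w_reflected = SIGMA * t_reflected_k ** 4
    w_object = (w_total - (1.0 - emissivity) * w_reflected) / emissivity
    return (w_object / SIGMA) ** 0.25

# Example: apparent reading of 70 C, emissivity 0.95, reflected apparent temperature 20 C.
print(object_temperature(343.15, 0.95, 293.15) - 273.15)  # roughly 72 C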
The method of usage of IRT depends on the object which is measured. In general, there are two main methods of usage of IRT: comparative quantitative thermography and comparative qualitative thermography. The comparative quantitative thermography method serves for evaluating the condition of a machine or component by determining approximate temperatures. The standard points out that it is very difficult to determine precisely the actual temperature of a component using IRT in the field. This is due to a certain extent to the physics of IRT, which must take into consideration the multiple parameters that enable a true absolute temperature measurement. These IRT considerations are emissivity, reflectivity and transmissivity. The standard presents the following example of comparative quantitative thermography: if two or more machines are operating in the same environment and under the same load conditions, and one is experiencing an elevated temperature, this is usually an indication that a deteriorating condition may exist. However, the determination of the temperature difference would then assist in establishing the severity of the condition. In this example, a 5°C differential would be considered minor, whereas a 100°C differential may be considered to be critical. Also, knowing the approximate value of the elevated temperature would provide an indication that the temperature limit of a component may be approaching published values. Therefore, while qualitative measurements can also detect deficiencies, it is the quantitative measurements that have the capability of determining severity [8].
Comparative qualitative measurement compares the thermal
pattern or profile of one component to that of an identical or
similar component under the same or similar operating
conditions. When searching for differing thermal patterns or
profiles, an anomaly is identified by the intensity variations
between any two or more similar objects, without assigning
temperature values to the patterns. This technique is quick
and easy to apply, and it does not require any adjustments to
the infrared instrument to compensate for atmospheric or
environmental conditions, or surface emissivity. Although
the result of this type of measurement can identify a
deficiency, it does not provide a level of severity [8].
During thermographic measurements, it is necessary to respect the requirements of the standard ISO 18434 for baseline measurements, safety, calibration, data collection, customer responsibility, assessment criteria for temperature severity, profile assessment criteria, diagnosis and prognosis, the test report and the qualification of personnel. The standard ISO 18434 contains the annexes: A.1 How to measure reflected apparent temperature, A.2 How to measure the emissivity of a target, B Example safety rules and guidelines and C Case history examples. The requirements and annexes of the standard ISO 18434 contain adequate initial information for thermographic measurements. For conducting thermographic measurements on a particular technical system (ship system), it is also necessary to know the principle of operation of that technical system (ship system).
The article [1] contains a great deal of useful information on IRT, especially on: experimental methodologies for IRT based condition monitoring, data analysis methods for IRT based condition monitoring applications, applications of IRT in condition monitoring, monitoring of civil structures, monitoring of electrical and electronic components, weld monitoring, deformation monitoring, inspection of machinery, corrosion monitoring, application of IRT in the nuclear industry, IRT based condition monitoring in the aerospace industry and other applications. For these specific scopes, the authors give detailed information and the standards which are required for conducting condition monitoring.
Table 1 shows the principal causes behind major accidents.

Table 1. Principal causes behind major accidents [1]
Cause                 | Frequency (%)
Mechanical failure    | 38
Operational errors    | 26
Unknown/miscellaneous | 12
Process upset         | 10
Natural hazards       | 7
Design errors         | 4
Arson/sabotage        | 3

It is evident from the table that mechanical failure causes 38% of all major accidents, which stresses the importance of efficient condition monitoring practices [1].


Table 2 shows the peak wavelength of the emission spectrum for different temperatures and the corresponding events. It is evident that up to 3864 K the emitted radiation falls within the infrared region.

Table 2. Wavelength of the peak of the emission spectrum of a typical blackbody at various absolute temperatures [1]
Wavelength of the peak (μm) | Temperature (K) | Physical significance
0.75                        | 3864            | Lower limit of infrared region
1.60                        | 1811            | Melting point of iron
2.04                        | 1420            | Eutectic temperature of iron-carbon
2.89                        | 1000            | Eutectoid temperature of iron-carbon
3.10                        | 933             | Melting point of aluminum
7.77                        | 373             | Boiling point of water
9.56                        | 303             | Room temperature
10.61                       | 273             | Ice temperature
37.63                       | 77              | Liquefaction point of nitrogen
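The peak wavelengths in Table 2 can be reproduced directly from Wien's displacement law quoted in Section 2; a minimal check is sketched below.

# Check of Table 2 using Wien's displacement law: lambda_max * T = 2897.7 um K.
WIEN_CONSTANT = 2897.7  # um K

for t_kelvin in (3864, 1811, 1420, 1000, 933, 373, 303, 273, 77):
    print(t_kelvin, round(WIEN_CONSTANT / t_kelvin, 2))
# Gives 0.75, 1.60, 2.04, 2.90, 3.11, 7.77, 9.56, 10.61 and 37.63 um,
# matching the tabulated values within rounding.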
For diagnostic measurement in the purpose of predictive ship maintenance, monitoring the temperature of ship systems is one of the best predictive maintenance methodologies. Ship systems which are typically measured with IRT are: switchboards in engine control rooms, emergency generators, marine engines, thrusters, accommodation fans/air conditioners, lifts, bridge consoles, light and power distributors, etc. The article [1] gives a table which shows the types of infrared cameras used by different research groups for various condition monitoring applications. Article [1] also gives a schematic representation of passive and active thermography techniques along with various applications.

Picture 2. Schematic representation of passive and active thermography techniques along with various applications [1]

The same article [1] gives a review of the emissivity of some common materials (when the wavelength is not specified, it covers the entire MIR and FIR range) and a review of the relevant standards for condition monitoring applications.

Thermographic scans of electrical installations are possible while the systems are operative, without causing costly downtime. Faults in electrical power systems can be classified into a few categories, such as poor electrical connections, short or open circuits, overloads, load imbalance and improper equipment installation. In most cases, poor electrical connections are among the most common problems in transmission and distribution lines of electrical power systems [4]. According to an IRT survey conducted during the period 1999-2005, it was found that almost half of the thermal problems were found in conductor connection accessories and bolted connections. The problems are mainly caused by loose connections, corrosion, rust and inadequate use of inhibitory grease. This kind of problem can be recognized by inspecting the heat pattern with an IRT camera, where the highest temperature point indicates the location of the problem. Article [4] gives as an example an oxidized connection of a miniature circuit breaker with its corresponding infrared image. The oxidized cable connection caused a hot spot temperature exceeding 76.6°C at the time of inspection and resulted in a burning sign. This condition requires immediate attention and repair. It is recommended that, if oxidation and arcing damage cannot be repaired, the parts and cables should be replaced [4].

4. THERMOGRAPHIC TESTING OF SHIP SYSTEMS AND DEVICES
This article presents thermographic monitoring of ship systems and devices. Thermographic monitoring was conducted before the sailing and several times during the sailing, as follows: at 7.40 h, 9.15 h, 12.10 h, 15.10 h and 17.15 h. Comparative qualitative measurement according to ISO 18434-1:2008 was used. Thermographic photography was performed on the following systems and devices:
- frequency converters (at several positions),
- electric propulsion motors (at several positions),
- reduction gears (at several positions),
- elastic treads between the electric propulsion motors and the reduction gears,
- journal bearing of the intermediate drive shaft,
- after stern tube bushes,
- diesel engine generators and
- bilge and firefighting pumps.

The measurements were conducted by qualified thermographers (one holding a master of science degree and one a magister's degree) with a FLIR SC 620 infrared camera (operating range 8-12 μm, with a 45° broadband lens). The following parameters were set on the FLIR SC 620 infrared camera:
- distance to the object under investigation: 1 m,
- reflected temperature: 20°C,
- ambient temperature: 15-20°C,
- emissivity: ε = 0.95,
- operating temperature range: 0-500°C.

Picture 3 marks the areas in which the greatest, least and average temperatures were observed on the frequency converters and the electric propulsion motors; Tables 3 and 4 show the variations of temperature on these objects.


Picture 3. Thermograph of the electric propulsion motor and frequency converter on the left shaft line of the vessel's shafting
Pictures 4, 5 and 6 also present thermographs made during the thermographic monitoring of the following systems and devices: the after stern tube bushes, the elastic treads between the electric propulsion motors and the reduction gears, and the exhaust manifold of the diesel engine. The figures show specific characteristics of the ship systems in operation, such as sealing liquid dribbling on the after stern tube bushes and abnormal overheating of the exhaust manifold.


Picture 4. Thermograph of the after stern tube bushes on the left and right shaft lines of the vessel's shafting (presentation of sealing liquid dribbling)

Table 3. Temperatures of the left electric propulsion motors

Left electric propulsion motors | Least temperature (°C) | Greatest temperature (°C) | Average temperature (°C)
Initial state                   | 23.4 | 26.1 | 25.1
The second monitoring           | 50.7 | 57.1 | 54.6
The third monitoring            | 74.6 | 83.8 | 80.2
The fourth monitoring           | 81.1 | 93.4 | 89.1

Picture 5. Thermograph of the elastic treads between the electric propulsion motors and reduction gears on the left shaft line of the vessel's shafting

Table 4. Temperatures of the right electric propulsion motors

Right electric propulsion motors | Least temperature (°C) | Greatest temperature (°C) | Average temperature (°C)
Initial state                    | 23.4 | 26.1 | 25.1
The second monitoring            | 48.3 | 52.5 | 51.1
The third monitoring             | 66.9 | 71.7 | 70.0
The fourth monitoring            | 71.0 | 78.2 | 75.5

Picture 6. Thermograph of the diesel engine exhaust manifold on the left shaft line of the vessel's shafting (presentation of abnormal overheating of the exhaust manifold)

On the basis of the thermographic monitoring results, the conclusions are:
- During the thermographic scanning and monitoring of the electric propulsion motors and frequency converters for sailing, the regular operating temperatures of these devices were not exceeded;
- A temperature difference between the housings of the right and left electric propulsion motors was established, reaching an average of 13.5°C at the fourth measurement. This confirms the temperature difference between the right and left electric propulsion motors shown on the display devices of the frequency converters.
- The thermographic scanning of the other ship systems and devices on the right and left shaft lines did not show a considerable temperature difference, or temperatures which would indicate improper running of the ship systems and devices.
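The reported left/right difference can be checked directly from the average temperatures of Tables 3 and 4; a small arithmetic sketch follows.

# Left/right difference of average motor temperatures, from Tables 3 and 4.
left_avg  = [25.1, 54.6, 80.2, 89.1]   # left electric propulsion motor (Table 3)
right_avg = [25.1, 51.1, 70.0, 75.5]   # right electric propulsion motor (Table 4)

for i, (left, right) in enumerate(zip(left_avg, right_avg), start=1):
    print("monitoring", i, "difference", round(left - right, 1), "C")
# The fourth monitoring gives 89.1 - 75.5 = 13.6 C, consistent with the roughly
# 13.5 C average difference noted above.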

5. CONCLUSION
Infrared thermography is a new and contemporary diagnostic measuring method for the process of condition-based maintenance of ship plants, systems and equipment. From this article, it can be seen that IRT is an accepted and confirmed method for condition monitoring and condition-based maintenance of various technical systems. For many technical fields, standards have been defined that determine the mode of application of IRT in the process of condition-based maintenance. The first part of this article gives a review and a great deal of information about the application of IRT, derived from the quoted articles. The second part of this article shows an application of IRT for diagnostic measuring in the process of condition-based maintenance of ship plants, systems and equipment. The IRT diagnostic measuring was performed on the basis of ISO 18434-1:2008(E) Condition monitoring and diagnostics of machines - Thermography - Part 1: General procedures.
This article has described the first application of IRT for the process of condition-based maintenance of ship plants, systems and equipment, which is very important for future usage. The conclusion is more than positive: IRT diagnostic measuring proved to be practical, easy to use and of undoubted quality. It is useful to combine the IRT quantitative and qualitative comparative techniques, since this gives commendable indicators of the running of ship systems and devices. Furthermore, with a view to systematizing the data about thermographic diagnostic measuring, there is a need to create and develop a database of thermographic measurements of ship plants, systems and equipment. With periodic thermographic measurements, combined with other techniques of diagnostic measuring and exploiting a developing database of diagnostic measurements of ship plants, systems and equipment, an efficient system of condition-based maintenance of ship plants, systems and equipment could be developed.
References
[1] S. Bagavathiappan, B.B. Lahiri, T. Saravanan, John
Philip, T. Jayakuma, Infrared thermography for
condition monitoring A review, Infrared Physics &
Technology 60 (2013) 3555;
[2] Vladimir Paagi, Marijan Muevi, Dubravko Kelenc,
Infrared Thermography in Marine Applications,
Brodogradnja, 59(2008)2, 123-130;
[3] Homin Song, Hyung Jin Lim, Sangmin Lee, Hoon
Sohn, Wonjun Yun, Eunha Song, Automated detection
and quantification of hidden voids in triplex bonding
layers using active lock-in thermography, NDT&E
International 74 (2015) 94105;
[4] Mohd Shawal Jadin, Soib Taib, Recent progress in
diagnosing the reliability of electrical equipment by
using infrared thermography, Infrared Physics &
Technology 55 (2012) 236245;
[5] A. Rogalski, Recent progress in infrared detector
technologies, Infrared Physics & Technology 54
(2011) 136154;
[6] Antoni Rogalski, Infrared detectors: an overview,
Infrared Physics & Technology 43 (2002) 187210;
[7] Gang-Min Lim, Dong-Myung Bae and Joo-Hyung
Kim, Fault diagnosis of rotating machine by
thermography method on support vector machine,
Journal of Mechanical Science and Technology 28 (8)
(2014) 2947~2952;
[8] ISO 18434-1:2008(E) Condition monitoring and
diagnostics of machines Thermography Part 1:
General procedures.


VOLUMETRIC CALIBRATION FOR IMPROVING ACCURACY OF


AFP/ATL MACHINES
SAMOIL SAMAK
Institute for Advanced Composites and Robotics, Prilep, ssamak@iacr.edu.mk
IGOR DIMOVSKI
Institute for Advanced Composites and Robotics, Prilep, igord@iacr.edu.mk
VLADIMIR DUKOVSKI
Faculty of Mechanical Engineering, Ss. Cyril and Methodius University, Skopje, vladimir.dukovski@mf.edu.mk
MIRJANA TROMPESKA
Institute for Advanced Composites and Robotics, Prilep, mirjanat@iacr.edu.mk

Abstract: Automated Fiber Placement (AFP) and Automated Tape Laying (ATL) technologies are mostly used in
aerospace industry. Deviations from predefined position and orientation of the AFP/ATL machines end-effector may
cause defects of the final product like gaps and laps of the laminate ply, tow end placement errors, pressure and
temperature variations, etc. That makes clear the importance of accuracy of AFP/ATL machines. Calibration is needed
to enhance accuracy.
Development and implementation of a comprehensive procedure for volumetric calibration of three linear axes is
described in this paper. According to ISO 230-1:2012 and ISO 230-2:2014 standards, 18 position dependent and 3
position independent (in total 21) errors of the 3 linear axes are considered. Measurements are performed using laser
interferometer on ATL machine produced by company Mikrosam. Obtained data are used for calibration of that
machine and validity of the results is verified by comparison with the calibration results obtained by TRAC-CAL
software developed by ETALON AG.
Keywords: AFP/ATL, volumetric calibration, accuracy, geometric errors.

1. INTRODUCTION
Composites are often used in aerospace industry. Leading
aerospace companies have already made airplanes with
more than 50% of composites [1]. Because of that, it is
necessary to develop automated manufacturing of large
parts of composites for commercial and military airplanes.
AFP and ATL are the two crucial types of automated machines for that purpose. The difference between them is that Automated Fiber Placement (AFP) is more flexible, allowing control of the tape width - different fiber tows can be cut at different times. That allows the AFP machine to be used for placing prepreg on more complex surfaces and to reduce the scrap, even to 5% [2]. Automated Tape Laying (ATL) mainly uses wider prepreg tape. It is very efficient for large parts with simple geometry.

Picture 1. Automated Fiber Placement

Picture 1 shows fiber placement and Picture 2 shows tape laying.
AFP/ATL as a technological process is related to the following processes:
Integration of the robot platform and AFP/ATL head
Tape path generation and trajectory planning
Process parameters control

Picture 2. Automated tape layup



In [3] more details about technological process can be


found.

To avoid such defects, it is very important to achieve high precision and accuracy of the AFP/ATL machines. For that purpose, a comprehensive 3D volumetric calibration procedure used for compensating the errors of the 3 linear axes has been developed and is presented in this paper.

The most sophisticated part of such machines is


AFP/ATL head. It has to contain [2]:
Prepreg delivery system
Cutting and restarting system
Roller and compression system
Heat regulation system

The opportunity for calibration of the orientation errors will be researched in the future. Details about the importance of AFP/ATL orientation accuracy are given in [7].

Uniform working of compression system is important.


That means, it is not sufficient to control only the position
precisely, but as well the orientation of the AFP/ATL
head.

2. VOLUMETRIC ERRORS
In traditional calibration methods, mainly separated
measurements for each axis are performed and they are
compensated in the same manner. Compensation of the
axes are mutually independent, eventually some of the
influence on dependence of kinematic configuration is
included [1].

To be able to produce 3D forms with complex geometry, an AFP/ATL machine should be built as a robotized system with 6 degrees of freedom (DOF). A 6-axis ATL machine built in gantry style, as large as the manufacturing of parts for the aerospace industry requires, is shown in Picture 3. All the measurements and the testing of the results for the calibration algorithm described in this paper were performed on this machine.

Some of the contemporary calibration techniques,


developed for AFP/ATL machines are described in [6],
[8] and [9].
To enhance the position accuracy of the AFP/ATL head, it is necessary to include all of the 21 volumetric errors, according to the standards ISO 230-1:2012 [10] and ISO 230-2:2014 [11], as well as to the ISO technical report [12]. The volumetric errors appear due to inaccuracies in machine manufacturing or installation [13]. These 21 volumetric errors concern only the translational axes, and their calibration can enhance the accuracy only of the position of the machine's end-effector.
In a volumetric calibration procedure there are 18 position dependent geometric errors, 6 errors for each axis, and 3 position independent geometric errors.

Picture 3. ATL machine, produced by Mikrosam


company

There are 3 displacement errors for each of the


translational axes X, Y and Z - in total 9 such errors.

There are several factors for the AFP/ATL process end


product quality: curvature variations of the surface used
for laying on, prepreg properties, temperature etc. [4]
Comprehensive review of defect types and the reasons for
their appearing could be found in [5].

Displacements along the same axis as the measurement is


performed are called positional errors (Picture 5.).
EXX(x), EYY(y) and EZZ(z)

(1)

Picture 4. Gaps and overlaps

Picture 5. Positional errors

Avoiding defects of laminates that appear during the AFP/ATL process due to deviations from the predefined position on the laying path is emphasized in this paper. Such defects are: course/tape gaps, course/tape overlaps and tow/tape end placement errors of the layup [6].

There are also 3 horizontal and 3 vertical straightness


errors.
The horizontal straightness errors (Picture 6.) are denoted
by:

EZX(x), EZY(y) and EYZ(z)      (2)

Picture 6. Horizontal straightness errors

The vertical straightness errors (Picture 7.) are denoted by:

EYX(x), EXY(y) and EXZ(z)      (3)

Picture 7. Vertical straightness errors

For each of the translational axes X, Y and Z there are 3 rotational errors as well. They are also position dependent. These 9 errors are classified in 3 groups: roll, pitch and yaw errors.

Roll rotational errors (Picture 8.) are denoted by:

EAX(x), EBY(y) and ECZ(z)      (4)

Picture 8. Roll rotational errors

Pitch rotational errors (Picture 9.) are denoted by:

EBX(x), EAY(y) and EAZ(z)      (5)

Picture 9. Pitch rotational errors

Yaw rotational errors (Picture 10.) are denoted by:

ECX(x), ECY(y) and EBZ(z)      (6)

Picture 10. Yaw rotational errors

Nominally, the translational axes are mutually perpendicular. In practice, there are small deviations from the right angle. Those deviations are called squareness errors (Picture 11.) and they are denoted by:

SXY, SYZ and SZX      (7)

Squareness errors are position independent geometric errors. Each of them is expressed with a single number.

Picture 11. Squareness errors


3. 3D VOLUMETRIC CALIBRATION ALGORITHM
All the measurements of the 21 volumetric errors were conducted on the 6 DOF ATL machine (Picture 3.) produced by the innovative company Mikrosam. A laser interferometer was used; its resolution is 1 nm, its declared linear error is between 2 and 3 μm and its measurement range is 15 m. The experts from the reputable German company AfM (Accuracy for Machines) made all the measurements at the Institute for Advanced Composites and Robotics in Prilep, Macedonia, in October 2014.
The measurement was time consuming and it was based on a strategy which includes planning of the tracer positions and different combinations of reflector offsets and tool paths. The number of measured reflector positions depends on the statistical model for calculating the uncertainties. The interferometer only measures the lengths of the beam for every tracer-reflector position in a predefined measurement configuration. All the data collected from the measurement are used to obtain a complete error map with estimations of all 21 geometric errors for every measurement point, using an appropriate mathematical model and a sophisticated software tool.

Picture 13. The skewed grid of the actual coordinates

The workspace partitioned in this way is approximated with cells that are geometrically polyhedrons (Picture 13.). For each knot of such a grid, the actual coordinates with the measured errors included are calculated.

The number of measurement points and their ranges for each axis separately are given in Table 1. The measurement points are uniformly distributed, with a step of 25 mm. The error estimation for some of them is obtained directly from the measurement and for some of them using interpolation.

Table 1. Measurement points
Axis | Number of measurement points | Min. (mm) | Max. (mm)
X    | 322                          | 940       | 8965
Y    | 142                          | 520       | 4045
Z    | 50                           | -1200     | 25

In this way, the machine's workspace is divided into 2,217,789 3D cells, using the measurement points. In the ideal coordinate system, each cell is shaped like an ideal box. Due to the geometric errors, in reality the axes are not mutually perpendicular straight lines, and they are skewed (Picture 12.).

First, corrections of the position dependent geometric errors are made, using the measured position independent geometric errors:

EXY(y) ← EXY(y) + SXY · y      (8)
EXZ(z) ← EXZ(z) + SZX · z      (9)
EYZ(z) ← EYZ(z) + SYZ · z      (10)

For each knot, the corrections of the nominal coordinates are calculated using the measured displacement errors, i.e. the appropriate actual coordinates are calculated:

Xact = [ x + EXX(x),  EYX(x),  EZX(x) ]T      (11)

Yact = [ EXY(y),  y + EYY(y),  EZY(y) ]T      (12)

Zact = [ EXZ(z),  EYZ(z),  z + EZZ(z) ]T      (13)

Finally, the actual coordinates of a skewed grid knot Pact, mapped to the appropriate ideal grid knot, given with its nominal coordinates Pnom = [x, y, z]T, are determined by:

Pact = Xact + Rx-1(x) · Yact + Rx-1(x) · Ry-1(y) · Zact      (14)

where, the matrices Rx and Ry include the rotational


errors. They are given by:

Picture 12. The skewed coordinate axes



            [    1        -ECX(x)     EBX(x) ]
Rx(x)  =    [  ECX(x)        1       -EAX(x) ]      (15)
            [ -EBX(x)      EAX(x)       1    ]

            [    1        -ECY(y)     EBY(y) ]
Ry(y)  =    [  ECY(y)        1       -EAY(y) ]      (16)
            [ -EBY(y)      EAY(y)       1    ]

An original algorithm based on linear algebra tools determines whether some 3D point is inside a polyhedron given by its 8 vertices; details of that algorithm are given in [1]. If the desired point is not inside the default cell found using the bisection method, the 26 neighbouring cells are checked additionally, until the proper cell in the skewed grid is determined. After that, the appropriate polynomial coefficients for that cell are loaded. They are used to obtain a linear system with 3 unknowns, and its solution gives the commanded coordinates Qcom.

The total displacement error vector E for each grid knot can be calculated by the equation:

E = Pact - Pnom      (17)

The entire procedure for determining the commanded coordinates Qcom is performed in real time, including the search for the proper cell of the skewed grid.

Equations (8)-(16) depend on the robot configuration. Details can be found in [14] and [15].
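A minimal sketch of the forward error model of equations (11)-(16) is given below. It assumes that the squareness corrections of equations (8)-(10) have already been folded into EXY, EXZ and EYZ, uses the small-angle rotation form reconstructed above, and fills the error terms with arbitrary placeholder values rather than measured data.

import numpy as np

# Sketch of equations (11)-(16): actual coordinates of one grid knot from its
# nominal coordinates and the geometric errors (placeholder values, not measured data).
def actual_point(x, y, z, e):
    Xact = np.array([x + e["EXX"], e["EYX"], e["EZX"]])
    Yact = np.array([e["EXY"], y + e["EYY"], e["EZY"]])
    Zact = np.array([e["EXZ"], e["EYZ"], z + e["EZZ"]])

    def small_rotation(roll, pitch, yaw):
        # Small-angle rotation matrix of the form used in equations (15) and (16).
        return np.array([[1.0, -yaw, pitch],
                         [yaw, 1.0, -roll],
                         [-pitch, roll, 1.0]])

    Rx = small_rotation(e["EAX"], e["EBX"], e["ECX"])
    Ry = small_rotation(e["EAY"], e["EBY"], e["ECY"])

    # Equation (14): Pact = Xact + Rx^-1 Yact + Rx^-1 Ry^-1 Zact
    return Xact + np.linalg.inv(Rx) @ Yact + np.linalg.inv(Rx) @ np.linalg.inv(Ry) @ Zact

errors = {key: 1.0e-3 for key in ("EXX", "EYX", "EZX", "EXY", "EYY", "EZY",
                                  "EXZ", "EYZ", "EZZ", "EAX", "EBX", "ECX",
                                  "EAY", "EBY", "ECY")}
print(actual_point(1000.0, 500.0, -100.0, errors))  # Pact of one nominal knot (mm)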

4. RESULTS AND DISCUSSION

If some point is inside a cell, the error is approximated by expressing it as a function of the coordinates and of the appropriate errors at the vertices of that cell. A linear fitting method is applied. The deviations are minimized using the least squares method as the optimization approach.
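A minimal sketch of this linear fitting step, with illustrative values only, is given below: one error component at the 8 vertices of a cell is fitted as a linear function of the coordinates by least squares and then evaluated at an interior point.

import numpy as np

# Linear least-squares fit of one error component over a single 25 mm cell
# (vertex errors below are illustrative values, not measured data).
vertices = np.array([[x, y, z] for x in (0.0, 25.0)
                               for y in (0.0, 25.0)
                               for z in (0.0, 25.0)])
vertex_error = np.array([0.010, 0.012, 0.011, 0.013,
                         0.014, 0.016, 0.015, 0.017])   # error at the 8 vertices (mm)

A = np.hstack([np.ones((8, 1)), vertices])              # model e = c0 + c1*x + c2*y + c3*z
coeffs, *_ = np.linalg.lstsq(A, vertex_error, rcond=None)

interior = np.array([1.0, 10.0, 5.0, 20.0])             # [1, x, y, z] of an interior point
print(interior @ coeffs)                                # interpolated error at that point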

The 3D volumetric calibration algorithm is implemented in Matlab. The large number of measurement points given in Table 1 is used to calculate and store the actual coordinates for all knots of the machine's workspace grid, as well as the fitting polynomial coefficients for the approximation of the total 3D displacements for interior points, for all cells.

This way, algorithm for total volumetric error


approximation is designed, for any point over entire
machine workspace, based on the measured geometric
errors in the sampled points. This approach is nonparametric approach, or black-box approach. Details for
non-parametric calibration and for error interpolation over
workspace could be found in [16].

The inverse calibration procedure described in Section 3 is also implemented in Matlab. In that way, the comprehensive volumetric calibration procedure is implemented in Matlab.
The correctness of the implemented algorithm is checked using comparative analysis [1]. The results obtained by this algorithm are compared against the results generated by the TRAC-CAL software [17]. The TRAC-CAL software solution is the established solution of the company Etalon AG for 3D volumetric calibration and compensation.

According to Mooring et al. [16], a comprehensive


calibration algorithm includes forward calibration and
inverse calibration. Calculation of the actual coordinates
for all ideal grid knots and determining of the fitting
polynomial coefficients for estimation of the total error
for the points inside cells are carried offline. All the
obtained data are stored.

There are 79 sampled points from the machines


workspace. For all of them, displacement along each
dimension X, Y and Z is calculated, and the total vector
deviations as well. The same calculation is done by the
algorithm described in this paper and by TRAC-CAL.

Forward calibration means to find actual coordinates for


any point over the workspace, given by its nominal
coordinates. First, bisection method is used to determine
appropriate ideal grid cell where the given nominal point
lies. The fitting polynomial coefficients appropriate for
that cell are loaded and they are used for estimation of
actual coordinates of the given nominal point. Calculation
of actual coordinates is performed in real time, for any
given nominal point.

The total vector deviations of both approaches are


compared. Differences between two algorithms are
depicted on Picture 14.

In practice, for enhancing the machine accuracy, the inverse calibration procedure is crucial. In fact, the compensation which should be commanded to the translational axes motors to obtain the desired position is calculated in this step.
That means that, if the desired coordinates Qdes are given, the commanded coordinates Qcom should be determined. It is especially important to be able to calculate these compensations in real time, since a large number of points are sent to the controller; they are given by their desired coordinates and they should be compensated online.

Picture 14. Deviations of error estimations in sampled


points
The range of obtained differences is between 0 and 0.2
m. The mean of this sequence is 0.033m with standard
deviation of 0.057m.

If the desired coordinates Qdes are given, the first step is finding the proper cell in the skewed grid, such that the point Qdes is inside this polyhedron. An original algorithm based on linear algebra tools is designed to determine whether a given point lies inside such a cell (see Section 3).
Statistical t-test is used to establish if there is statistically



significant difference between the mean values of the sequences for each axis X, Y and Z separately - the sequence obtained by the algorithm described in this paper and the appropriate sequence obtained by the TRAC-CAL software. A two-tailed t-test is used 3 times, once for each of the axes X, Y and Z. In all three cases, there is no statistically significant difference between the appropriate sequences. Using a 95% confidence interval, it was shown that the obtained values for both sequences have the same means, as illustrated by the sketch below.
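The sketch below shows such a comparison using SciPy; the exact test variant is not specified in the paper beyond a two-tailed t-test, and the two arrays here are synthetic placeholders standing in for the 79 per-point estimates of one axis.

import numpy as np
from scipy import stats

# Two-tailed t-test comparing the mean deviation estimates of the two algorithms
# along one axis (synthetic placeholder data, 79 sampled points).
rng = np.random.default_rng(0)
dev_this_paper = rng.normal(0.03, 0.05, size=79)             # um
dev_trac_cal   = dev_this_paper + rng.normal(0.0, 0.01, 79)  # um

t_stat, p_value = stats.ttest_ind(dev_this_paper, dev_trac_cal)
print(t_stat, p_value)
# At the 95% confidence level, p_value > 0.05 means no statistically significant
# difference between the means, which is the conclusion reported for all three axes.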


[4] Zhu,S., An automated fabric layup machine for the


manufacturing of fiber reinforced polymer
composite, (Graduate Theses and Dissertation Paper 13170), Iowa State University, 2013
[5] Jordaens,A., Steensels,T. Formation of defects in flat
laminates during automatic tape laying (framework
of a masters thesis), Faculty of engineering
technology, Leuven, Belgium, 2015
[6] Jeffries,K.A. Enhanced Robotic Automated Fiber
Placement with Accurate Robot Technology and
Modular Fiber Placement Head, SAE Int. J. Aerosp.
6(2):2013, doi:10.4271/2013-01-2290, 2013
[7] Moutran,S.R., Feasible Workspace for Robotic Fiber
Placement, (master degree thesis) Virginia
Polytechnic Institute and State University,
Blacksburg, Virginia, 2002
[8] Flynn,R. Rudberg,T., Stamen,J. Automated Fiber
Placement Machine Developments: Modular Heads,
Tool Point Programming and Volumetric
Compensation, SME Technical Paper, 2011
[9] Saund,B., DeVlieg,R., "High Accuracy Articulated
Robots with CNC Control Systems" SAE Int. J.
Aerosp. 6(2):2013, doi:10.4271/2013-01-2292, 2013
[10] ISO 230-1:2012 Test code for machine tools -- Part
1: Geometric accuracy of machines operating under
no-load or quasi-static conditions, an International
Standard, by International Standards Organization,
2012
[11] ISO 230-2:2014 Test code for machine tools
Part 2: Determination of accuracy and repeatability
of positioning of numerically controlled axes, an
International Standard, by International Standards
Organization, 2014
[12] Technical report: Machine tools - numerical
compensation of geometric errors, ISO/TR
16907:2015, ISO, 2015
[13] Fan, K.C., Yang, H.W., Li, K.Y. A Design
Methodology of Volumetric Error Analysis and
Error Budget for Machine Tools, The 14th IFToMM
World Congress, Taipei, Taiwan, 2015
[14] G. Ren, J. Yang, G. Liotto, C. Wang, Theoretical
Derivations of 4 body Diagonal Displacement Errors
in 4 Machine Configurations, Proceedings of the
LAMDAMAP Conference, Cransfield, UK, 2005
[15] C. Wang, Current Issues on 3D Volumetric
Positioning Accuracy: Measurement, Compensation
and Definition, Proc. SPIE 7128, Seventh
International Symposium on Instrumentation and
Control Technology: Measurement Theory and
Systems and Aeronautical Equipment, 71281Z, 2008
[16] B.W.Mooring, Z.S.Roth, M.R.Driels, Fundamentals
of manipulator calibration, John Wiley &Sons Inc.,
1991
[17] http://www.etalon-ag.com/new-software-release/
(16.09.2016)

That means that the precision with which this algorithm estimates the total geometric error at an arbitrary point of the machine's workspace, for each of the axes X, Y and Z, is close to the precision established by the TRAC-CAL software estimates. That indicates that the algorithm verified in this way may be used for calculating the compensations which need to be commanded in order to enhance the position accuracy of the ATL head of this machine.
After the verification, the algorithm was implemented in C++ and the compensations are calculated in real time.

5. CONCLUSION
Description of AFP/ATL technologies and potential
defects caused by their eventual inaccuracy is given at the
beginning. That makes clear the need for volumetric
calibration procedure for AFP/ATL machines.
The volumetric calibration algorithm described in this
paper is tested on 6 DOF ATL machine, produced by
company Mikrosam. Matlab is used for implementation.
Only the three translational axes are calibrated, so only
enhancing of the ATL head position accuracy may be
achieved. The large amount of input data is used, obtained
by measurements conducted by AfM companys experts.
Commanded coordinates are calculated in real time, using
the stored data and algorithm based on described blackbox approach for non-parametric calibration. Results are
verified using comparative analysis, comparing the
obtained results against results obtained by TRAC-CAL
software.
To achieve complete accuracy of AFP/ATL machine, the
orientation of the head should be calibrated as well. Also,
the geometric errors of rotational axes should be taken
into account. In the future work, extension of this
algorithm for calibration of all axes will be considered.

References
[1] ,.,

, ( )
, ,
, 2015
[2] Khan,S., Thermal control system design for
automated fiber placement process, (master degree
thesis) Concordia University Montreal, Quebec,
Canada, 2011
[3] Ahrens,M., Mallick,V., & Parfrey,K. Robotic based
thermoplastic fibre placement process, Robotics and
Automation, 1998. Proceedings. 1998 IEEE
International Conference on (Vol. 2, pp. 1148-1153),
IEEE, 1998

DIAGNOSTIC APPROACH TO THE MAINTENANCE OF


MARINE SYSTEMS
DUAN CINCAR
Technical Test Center, Belgrade, dusancincar@yahoo.com

Abstract: Condition based maintenance represents a modern approach and more commonplace principle of maintenance of
technical systems.
Technical diagnostics, as an integral part of the process of condition based maintenance, determines the technical condition of
the system with a certain accuracy at a given point in time.
The text gives an overview of defining a set of diagnostic measurements on ship systems with the aim of introducing,
applying and developing the concept of condition based maintenance on ships from the composition of the River Flotilla of
Serbian Armed Forces.
Technical Test Center is currently implementing the project "Diagnostic approach to the maintenance of marine systems."
Key words: Technical diagnostics, ship technical condition.

1. INTRODUCTION

Timely and complete information on the actual condition of a technical system or part of a system is essential in the planning process as a function of command and control in the Serbian Armed Forces. Its aim is to find the optimal solution for the realization of the assigned missions and tasks. By quality information we consider timely, accurate and sufficient information. High quality information on the technical condition of ships will affect the functioning of the River Flotilla (RF) as a system in the following ways (Figure 1):

1. The process of making a decision.
Information on the current own abilities is information on the key operational capabilities, i.e., viewed from the side of the technical condition of the system:
- the ability of timely availability of forces,
- the ability of effective use of forces,
- deployment capability and mobility in the area of operations.

2. Maintenance of the ship.
Information about the technical condition of ships will positively affect the quality of maintenance planning, reducing the possible failure of ships from operational use and reducing maintenance costs (reduction of the percentage of breakdown damage and of unnecessary replacement and repair of correct system components).

3. More efficient use of existing resources.
The formation of information about the technical condition of ships requires hiring skilled personnel and equipment, which are contained in the Technical Test Center, i.e., the existing resources that are found in the system of the Serbian Armed Forces.

4. Improvement of own resources.
The process of determining the technical condition of ships will also result in continuous training of personnel, both of the persons who perform diagnostic measurements and of the persons who will use the information on the technical condition during maintenance. The usage of diagnostic measurements requires constant monitoring of new methods and technical resources in the field of diagnostics and occasional purchase of instruments, which in turn will result in the improvement of own resources.

5. The introduction of a maintenance system based on the principle of condition based maintenance.
Condition based maintenance on ships, as a form of intelligent maintenance of the ship, is in line with modern trends in the field of maintenance.

6. Identification of risks and hazards on board.
Identification of possible failures in the formation stage will have a positive effect in terms of applying the principles of prevention, and will reduce the risk of possible injury or the impact of hazards on the ship's crew.

Figure 1. The impact of information on the technical condition of ships


2. DEFINING DIAGNOSTIC MEASUREMENTS
Diagnostic measurements will be carried out in accordance with the following standards:
- ISO 17359: Condition monitoring and diagnostics of machines - General guidelines. A flowchart of the condition monitoring procedure is given in Figure 2;
- ISO 13379: Condition monitoring and diagnostics of machines - General guidelines on data interpretation and diagnostic techniques;
- ISO 13380: Condition monitoring and diagnostics of machines - General guidelines on using performance parameters.

Figure 2. Condition monitoring procedure flowchart

2.1. Defining the critical ship systems on which to run diagnostic measurements
A warship is a specific technical system which is characterized by high effectiveness, expressed through the concepts of certainty, reliability and maintainability.
In simple terms, the function of effectiveness reflects the overall characteristics of a technical system and gives answers to the following questions:
- whether it can be involved in the work,
- how it performs a task,
- how it stays on task.
The factors of effectiveness, as well as the basic functions of a warship, are:
- reliability;
- certainty;
- functional benefit.
The features of a military ship can be identified with its prescribed purpose, or the tasks for whose realization it is intended.
The degree of realization of the functions of the ship, or of its purpose and objectives, is reflected in the following four important features of a military ship:
- features of military use (firepower, mine laying, prevention, transportation),
- maritime and maneuvering characteristics (speed, draft, autonomy),
- fighting resistance of the ship,
- availability.
By analyzing the first factor of effectiveness, and then the military characteristic points, it can be concluded that the critical functions that will be given priority in the maintenance of shipboard systems, and the critical points of a warship, are:
- mobility,
- availability,
- toughness.
On the basis of the determined key functions of the warship, the identification of the systems that provide those functions was made. The systems and subsystems, and the systems and equipment, that provide the key functions are:
- the hull and superstructure of the ship: formwork and construction;
- the ship drive group: drive engine, gearboxes, propeller shaft and shaft line;
- the electrical group: diesel generators.
The division of the marine systems, with the systems that provide the critical functions of the ship marked, is given in Figure 3.

Figure 3. Division of the marine systems and the systems that provide critical functions

2.2. Types of diagnostic measurements
According to the selected subsystems, the following types of measurements apply:
Measurements of the torque and power delivered to the propeller.



The measurements will be carried out on all vessels by application of strain gauges that will be mounted on the ship's shaft lines. The measuring gauges will be connected via a wireless transmitter to a measuring amplifier and a computer, and the angular deformation of the shaft will be recorded. From the angular deformation the derived quantities will be obtained: torque, power and torsional oscillations. Along with the deformation measurement, a shaft speed encoder will be connected to the measuring chain.
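A minimal sketch of how the derived quantities can be obtained from such measurements is given below; a solid circular shaft is assumed, and the shaft diameter, shear modulus and readings are placeholder values.

import math

# Torque from the measured shear strain on a solid circular shaft, and power from
# torque and shaft speed (all numerical values are assumed placeholders).
SHEAR_MODULUS = 79.3e9   # Pa, steel (assumed)
SHAFT_DIAMETER = 0.120   # m (assumed)

def torque_from_shear_strain(gamma):
    """Torque (N m) from the shear strain measured by the strain gauges."""
    tau = SHEAR_MODULUS * gamma                          # shear stress, Pa
    return tau * math.pi * SHAFT_DIAMETER ** 3 / 16.0    # torsion formula, solid shaft

def power_from_torque(torque, rpm):
    """Power (W) delivered at the given torque (N m) and shaft speed (rpm)."""
    return torque * 2.0 * math.pi * rpm / 60.0

torque = torque_from_shear_strain(250e-6)                # example strain reading
print(torque, power_from_torque(torque, 300.0))          # torque and power at 300 rpm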

(steering), lubricant analysis in accordance with ISO


14830- 1: Condition monitoring and diagnostics of
machines-tribology-based monitoring and diagnostic- Part
1 General guidelines (RIC engine, gearbox).

3. ANALYSIS OF RESULTS OF
MEASUREMENT
Analysis of measurement results represents the synthesis of
the following processes:

Measurements of vibration.
Vibration measurements are divided into three groups: linear vibration measurement, recording of the vibration frequency spectrum, and control of rolling bearings. Linear vibration measurements will be carried out in accordance with the group of standards ISO 10816: Mechanical vibration - Evaluation of machine vibration by measurements on non-rotating parts, ISO 8528-9:1995: Reciprocating internal combustion engine driven alternating current generating sets - Part 9: Measurement and evaluation of mechanical vibration, the rules of ABS (American Bureau of Shipping), ISO 20283-2: Mechanical vibration - Measurement of vibration on ships - Part 2: Measurement of structural vibration, the regulations of Germanischer Lloyd (German register of ships), and others. Recording of the vibration frequency spectrum will be carried out in order to localise faults more closely, i.e. in cases where the form and frequency of the vibration can give information on the possible cause of a fault. The condition of the rolling bearings will be checked by the shock-pulse method in order to determine their actual condition.
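A minimal sketch (assuming placeholder zone boundaries, not the normative limits of any of the standards cited above) of how a measured broadband RMS vibration velocity can be classified into severity zones of the ISO 10816 type:

def vibration_zone(v_rms_mm_s, boundaries=(2.8, 7.1, 11.0)):
    # boundaries are the A/B, B/C and C/D limits; illustrative values only -
    # the applicable part of the standard defines them per machine class.
    for zone, limit in zip("ABC", boundaries):
        if v_rms_mm_s <= limit:
            return zone
    return "D"

print(vibration_zone(4.5))   # hypothetical reading -> zone B with these limits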

- comparing the measured values with the limit values given in the respective technical regulations (standards, manufacturer's instructions);
- collecting data from the "history" of the system, the sources being the ship's technical documentation and the crew itself.
The results of the analysis are:
- determination of the size of deviations from the "normal behaviour" of the system;
- definition of the "trend" of the system's behaviour;
- determination of the causes of deviations, irregularities or failures;
- in the case of a deviation, prediction of the possible consequences;
- recommendations for maintenance activities;
- creation of a database on the technical condition of the ships (technical files).
In fact, the results of the analysis are the elements of the condition-based maintenance process given in ISO 17359: Condition monitoring and diagnostics of machines - General guidelines, providing condition assessments, forecasts and maintenance activities.
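To make the "trend" and "prediction of possible consequences" steps concrete, the sketch below fits a linear trend to a series of periodic condition readings and extrapolates the time at which a limit value would be reached; the readings and the limit are invented for the illustration.

import numpy as np

def time_to_limit(times, values, limit):
    # Linear trend through the readings; None if the trend is not rising.
    slope, intercept = np.polyfit(times, values, 1)
    if slope <= 0:
        return None
    return (limit - intercept) / slope

# Hypothetical yearly bearing-temperature readings (degC) against a 90 degC limit
print(time_to_limit([0, 1, 2, 3], [62.0, 66.5, 70.0, 74.5], 90.0))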

Diagnostic measurements in the field of thermography and temperature measurement.
Temperature measurements will be performed on all monitored systems. Thermographic measurements will be carried out according to ISO 18434-1: Condition monitoring and diagnostics of machines - Thermography - Part 1: General procedures. These measurements will be made for periodic comparison with an earlier condition, i.e. the measurements will be comparative in character.

4. ACTIVITIES AFTER THE MEASUREMENTS
After the maintenance procedure carried out on the basis of the condition assessment and prognosis for the technical system, the achieved level of reliability and the condition of the system after the maintenance intervention are analysed in cooperation with personnel from the RF logistics unit.

Material testing with ultrasound.
Ultrasonic measurements will be carried out for measurement of the thickness of the ship's hull and superstructure plating and for quality inspection of welded joints. These will be performed in accordance with the standards EN 14127, EN 10160 and ISO 17640 and the rules of DNV and RINA.

All diagnostic data obtained from the measurements, together with the assessment and prediction of the condition of the technical system and the feedback on the maintenance procedures performed, will form the technical file of the vessel. The data will be updated and analysed at intervals of, in principle, one year, with particular attention to the trends observed between diagnostic measurements.

Visual inspections with an endoscope.
Checks will be carried out mainly on the RIC engines (interior of the engine housing, cylinder space, etc.) and inside other systems and devices whose dismantling would require a substantial consumption of resources.

5. EXAMPLE OF DIAGNOSTIC MEASUREMENTS ON ONE OF THE VESSELS OF THE RF

Other diagnostic measurements.
This group comprises measurements that will be made only when necessary, i.e. in cases where the assessment of the technical condition of the system does not reach a satisfactory level of confidence. These are: measurement of pressures (engine compression, hydraulic elements of ship systems), electrical quantities (electric motors, switchboards, cable routing, etc.), measurement of angular displacement

This section gives an overview of the diagnostic measurements, with the resulting condition assessment and prognosis for the technical system, carried out on board one of the vessels of the RF. The example was chosen because of the volume of the measurements, the typical drive (electric drive) and the clearly defined cause of the problem. In preparing the measurements, the team had at its disposal information

obtained from the ship's crew about an elevated level of vibration on the ship structure when sailing at higher speeds (data from the "history" of the technical system).

5.2. Measurement of torsional vibrations
The diagram in Figure 6 presents a time record of the measured values of the torque and RPM of the left propeller shaft.


The diagnostic measurements were made in the following scope:
1. Measurement of the power delivered to the propeller and of the propeller shaft torsional vibrations.
2. Measurement of the RPM of the propeller shaft.
3. Measurement of linear vibrations of the equipment and of the ship structure.
4. Determination of the condition of the rolling bearings by the shock-pulse method.
5. Measurement of the temperature of the shaft line bearings and the drive motors.
6. Recording of other operating parameters of the diesel generators and electric motors.
7. Thermographic imaging of equipment and systems.


In the following, due to space limitations, only the most important measurement results are presented.

Figure 6. Time record of the measured values of the torque and RPM of the left propeller shaft

5.1. Measurement of the power delivered to the propeller

A comparison of the measured values, the calculated torsional stresses and the allowable torsional stresses is given in Table 1.

In order to measure the power delivered to the propeller (1), strain gauges (MT) were affixed to the intermediate shaft (4), directly next to the coupling (3) with the propeller shaft (2) (the disposition is shown in Figure 4).

Table 1. Comparison of the measured and allowable values

RPM of propeller shaft [min-1]   Calculated stress [N/mm2]   Allowable stress [N/mm2]
240.0                            4.23                        31.62
266.4                            4.32                        30.64
278.4                            7.08                        30.20
280.8                            9.02                        30.11
295.2                            5.27                        29.58
307.2                            4.41                        29.13
326.4                            4.08                        28.42
333.6                            3.93                        28.16

Figure 4. Disposition of the propeller shaft

The measurements were made in the upstream run, at different speeds. The measurement results are shown in the diagram in Figure 5.

Figure 5. Power diagram (left propeller shaft):
n - RPM of the left propeller shaft,
Pvrl - measured power on the left propeller shaft,
Peml - power of the left drive electric motor read on the control desk,
Pemd - power of the right drive electric motor read on the control desk.

5.3. Measurements of linear vibrations on the ship's drive complex

Linear vibration measurements were carried out in order to estimate the level of vibration and to compare the measured values with the acceptable values according to the standards listed in Section 2.2. The vibration level was measured on the following elements of the drive complex:
- diesel generators;
- electric motors;
- gear units;
- shaft line bearings.
The assessment of the vibration levels for the diesel generators is shown in Figure 7.


5.5. Temperature measurements
Temperatures were measured on the following ship subsystems:
- diesel generator fuel system (Table 3);
- shaft line bearings (Table 4);
- electric motor bearings (Table 5).
Table 3. Measured fuel temperatures of the left/right diesel generator (°C) - in the supply fuel tank, at the engine inlet and in the return fuel line, prescribed and measured values: 27.7/26.4; 53/52; 30/33; 27/27.

Table 4. Measured temperatures of the shaft line bearings at 340 min-1 (°C)

Time    Left prop. shaft (prow bear. / stern bear.)    Right prop. shaft (prow bear. / stern bear.)
12.30   25.7 / 25                                      25.7 / 25
17.40   31 / 30                                        31 / 30

Table 5. Measured temperatures of the electric motor bearings, left/right electric motor (°C)

Time    RPM of el. motor (min-1)    Bearing    Housing
12.30   1350                        58/50      54/50
17.40   1350                        85/72      79/59

Figure 7. Guidelines for the evaluation of the vibration intensity of the diesel generators according to ISO 8528-9
Maximum measured values:
- left diesel engine: 17.8 mm/s
- right diesel engine: 17.3 mm/s
- left generator: 14.5 mm/s
- right generator: 14.8 mm/s

5.4. Measurements of linear vibrations on the ship structure
Figure 8 shows a diagram of the linear vibrations measured on the ship structure and evaluated according to ISO 20283-2; for easier reading of the measurement results, the maximum values measured on either side of the stern structure are entered in it.

5.6. Determining the condition of the shaft rolling bearings
Shock-pulse measurements on the rolling bearings were carried out in order to determine the shock-pulse level and to compare the measured values with the acceptable values according to the recommendations given in the technical manual of the instrument with which the measurements were performed. The measurement results are shown in Table 2 in the form of SPM Instrument measurement protocols.
Table 2. Condition of the rolling bearings

Figure 8. Linear vibrations measured on the ship structure, evaluated according to ISO 20283-2

gearbox, according to the values given in the standard, indicate operation of the transmission in the zone of restricted operation, so its periodic monitoring during the operation of the ship is recommended.
Based on the above facts it can be concluded that the critical point of the propulsion system is the left propeller (damaged and deformed). The negative impact of the propeller's operation can be observed in the following ways:

5.7. Thermographic testing of devices and systems
As an example, a thermographic image of the aft end of the left drive motor (the position of the motor bearing) is given in Figure 9.

- increased load on the left electric motor (at the same rpm it is loaded about 20% more than the right one);
- the measured values of the torsional vibrations of the left shaft line are at the level of 30% of the permitted value, but at the same time about 3 times greater than those measured during the trial run in 2012;
- the temperature of the left motor rolling bearing exceeds the values prescribed by the manufacturer.
Given the measurement results, docking of the ship and repair of the blades of the left propeller are proposed, with mandatory static balancing of the propeller and a check of the condition of the stern tube bearings, i.e. measurement of the clearances in them.

Figure 9. Aft end of the left drive motor


The thermographic images will be used as a comparative
method that will track trends in temperature changes.

5.8. Analysis of the results with recommendations for maintenance activities

6. CONCLUSION
The possibility of raising the quality of information about one's own operational capacities and of making better use of one's own resources, while simultaneously improving those same resources, is the essence and the goal of this approach to maintenance.

Only the most important observations from the analysis of the data are given here:
Drive motors: The power diagram shows that at every rpm the left drive motor is significantly more loaded than the right one (by about 20%). This results in a somewhat higher measured temperature of the housing of the left electric motor and, in the final sum, in increased fuel consumption. The measurement of the power delivered to the propeller unambiguously confirmed that the left electric motor is overloaded due to the condition of the propeller. The propeller is damaged: all three propeller blades are deformed due to an impact and frayed along the edges, which leads to an uneven inflow of water to the propeller.

The tendency to use intelligent information systems in the maintenance of marine systems will be realised on the basis of this project, which collects and consolidates the methods, equipment, data and information on the actual condition of the marine systems that form the basis for the design and implementation of intelligent diagnostic information systems. One of the results of the project will be the form of presentation of the results, i.e. the form most appropriate for use when deciding on the employment of the RF ships.

An indirect result of the increased load on the left electric motor, i.e. of the heating of the entire motor, is additional warming of the motor bearings. The measured values of the torsional vibrations of the left propeller shaft are at the level of 30% of the allowable value, but are also about three times greater than those measured during the trial run in 2012 (the increase is directly caused by the operation of the damaged propeller). The linear vibrations are within acceptable limits with regard to the recommendations of standard ISO 10816-3.

On the basis of the experience and results of this approach, the use of diagnostic measurements can also be recommended for other systems in the Serbian Armed Forces, as well as during the testing of weapons and military equipment.


Gearbox: Based on the measured linear vibrations of the left gearbox, values were observed that deviate from the applicable standards. The measured linear vibration levels of the right



MAINTENANCE OF HYBRID VEHICLES

BLAŽA STOJANOVIĆ
Faculty of Engineering, University of Kragujevac, blaza@kg.ac.rs
MILAN BUKVIĆ
PhD student at the Faculty of Engineering, University of Kragujevac, milanbukvic76@gmail.com
RADOMIR JANJIĆ
Technical Test Center, Belgrade; PhD student at the Faculty of Engineering, University of Kragujevac, lari32@mts.rs

Abstract: This paper first presents the basic concepts of the different conceptual solutions for hybrid and electric vehicles, primarily in terms of their transmissions. Further on, certain observations are made concerning the reliability of these vehicles, taking into account that this area has not been explored to the extent that is the case with conventional vehicles (with IC engines).
With the development of modern diagnostic methods, a special place is taken by telediagnostics, as an area that offers huge advantages in the high-quality and timely diagnosis of all processes on hybrid and electric vehicles and provides excellent input parameters for optimising the maintenance system.
Finally, a serious approach to the optimisation of the maintenance system of modern hybrid and electric vehicles cannot be imagined without the combination of "soft" computing, i.e. fuzzy logic, the classical reliability theory of vehicles and newly developed diagnostic methods.
This approach to the maintenance system increases the quality of exploitation, increases availability and reduces the overall life-cycle costs of hybrid and electric vehicles.
Keywords: hybrid and electric vehicles, maintenance, transmission, reliability, diagnosis, fuzzy logic.
its advantages: ease of understanding, flexibility, tolerance of imprecise data, the possibility of modelling nonlinear functions, and the ability to capture expert knowledge expressed in natural language [2], [3].

1. INTRODUCTION
Hybrid vehicles belong to the class of low-emission vehicles (Low Emission Vehicles). They are based on two sources of energy: an energy-conversion unit (combustion engine or fuel cell) and a unit for the accumulation of the produced energy (batteries or ultracapacitors). The complete drive system comprises: an internal combustion engine, an electric generator, an electric motor, a power converter and the battery pack [1].

In examining the reliability of hybrid vehicles, diagnostics of hybrid vehicles appears as a special but inseparable segment, especially bearing in mind all the sophisticated diagnostic methods and new concepts for their application, including remote sensing of certain parameters and expert insight into the technical condition of the hybrid vehicle in telediagnostic centres [3].

There are two basic configurations of hybrid vehicles: serial and parallel. In addition, there is the serial-parallel configuration of a hybrid vehicle, resulting from efforts to combine the good characteristics of both the serial and the parallel configuration.

2. RELIABILITY OF HYBRID VEHICLES
Most of the scientific and popular articles and studies on hybrid and electric vehicles have been focused on the control of the electric drives and components applied in hybrid and electric vehicles.

The very complexity of the structure of the different hybrid vehicle concepts gives rise to the need to consider the reliability of these types of vehicles, both in terms of maintenance organisation and in terms of maintenance technologies.

However, very little has been written about the overall reliability of hybrid and electric vehicles as a transport system. This question is not trivial, and the overall acceptability and availability of these vehicles in the long term will significantly depend on it, in addition to the fuel consumption and the additional costs that are not present in conventional vehicles. In studying the reliability of hybrid and electric vehicles it must be emphasised that these vehicles are not just a

A prerequisite for a quality review of the maintenance of any technical system, particularly of hybrid vehicles as very complex technical systems, is the understanding and study of the reliability of the individual components, aggregates and systems of hybrid vehicles, and of the vehicles themselves as a whole.
The use of fuzzy logic gives a special dimension to the consideration of the maintenance of hybrid vehicles, given

combination of many types of machinery and of management and control systems intended to provide better fuel economy; there are many more aspects that must be considered in all their complexity.

activities in a given time period. Thus, reliability is associated both with a probability and with a time span. In addition, it is necessary to define the notion of availability; for this purpose a hypothetical system with reliability equal to 1 (the highest possible reliability) can be taken. Such a system can be said to be "fully" available.
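As a short numerical aside (an illustration based on assumed figures, not taken from the referenced sources), steady-state availability is commonly expressed through the mean time between failures (MTBF) and the mean time to repair (MTTR):

def availability(mtbf_h, mttr_h):
    # A = MTBF / (MTBF + MTTR)
    return mtbf_h / (mtbf_h + mttr_h)

print(round(availability(900.0, 12.0), 3))   # hypothetical values -> 0.987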

For a discussion of the reliability of hybrid and electric vehicles it is necessary to recall the definition of reliability as the probability that a component, subsystem or system is functional, i.e. that it performs its dedicated function at the end of a certain period of time without any changes, or the probability of performing maintenance

The overall reliability can best be studied by displaying the individual operating modes using block diagrams, as shown in Figure 1 [4].

a) conventional vehicle with internal combustion engine
b) hybrid vehicle with a serial drive
c) hybrid vehicle with a parallel drive
*ECU - Electronic Control Unit, ICE - Internal Combustion Engine, HEV - Hybrid Electric Vehicle
Figure 1 - The system block diagram for a) a conventional vehicle with internal combustion engine, b) a hybrid vehicle with a serial drive and c) a hybrid vehicle with a parallel drive
By reviewing the diagrams, certain conclusions can be drawn. For example, if the electric motor in a parallel hybrid vehicle is defective, the vehicle can still be driven by the combustion engine alone and the journey can continue, albeit with lower operating performance.
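The effect shown by the block diagrams can be sketched numerically: in a series structure every block must work, while in a parallel hybrid the two drive paths back each other up for the basic "vehicle can move" function. The component reliabilities below are arbitrary placeholders, not values from [4] or [5].

from math import prod

def series(*r):
    # all blocks must work
    return prod(r)

def redundant(*r):
    # at least one block must work
    return 1.0 - prod(1.0 - x for x in r)

R_ICE, R_EM, R_BAT, R_ECU, R_TRANS = 0.97, 0.98, 0.95, 0.99, 0.98
R_conventional = series(R_ICE, R_ECU, R_TRANS)
R_parallel_hybrid = series(R_ECU, R_TRANS, redundant(R_ICE, series(R_EM, R_BAT)))
print(round(R_conventional, 3), round(R_parallel_hybrid, 3))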

electric motor. In that case it is only necessary to recharge the hybrid vehicle batteries, or to make a correction in the internal-combustion-engine subsystem in order to provide additional battery charging while driving. Here it must particularly be taken into account that the battery voltage may not be allowed to fall below the permissible value, in order to prevent damage to the battery [4], [5].

If there is a failure of the internal combustion engine, a hybrid vehicle with a parallel drive can run on the electric drive, but only as long as the battery is able to power the

A similar analysis can be carried out for a hybrid vehicle with a serial drive; the final analysis of the observed outcomes can be summarised by the chart shown in Figure 2.

the vehicle, based on the monitoring of various information about it [6].
With this in mind, modern vehicles have diagnostic functions on the dashboard, where individual diagnostic parameters can be read, but forecasts are not provided. Diagnostics may take place at several levels. One is at the level of the vehicle itself, providing information to the driver or the service about what is happening with the vehicle. The second level of diagnostics is at a somewhat higher level of maintenance, when the vehicle can be broken down into different sub-systems in the service workshop, with identification of problems and eventual replacement of the defective part (assembly). The third level of diagnostics can be at the level of the car dealer or possibly the vehicle manufacturer, to determine the reasons why a part (circuit) did not perform its function. Finally, diagnostics can be at the level of components or subcomponents, when a more microscopic analysis of the components can reveal a deficiency in production or design, or other defects, if any. At the second level of maintenance, the part or subsystem is usually replaced or repaired without necessarily finding the cause of the malfunction. The third and last level of diagnostics can analyse the underlying cause of the malfunction due to design or manufacturing defects. Sometimes the cause can be the failure of individual structural components, inadequate materials and components, or misuse or improper use of the device by the user. In these cases, except for the latter, modifications or changes to the system or subsystem will have to be made by the vehicle manufacturer [6].

Figure 2 - Diagram for determining the reliability of vehicles from the available performance factors, expressed as a percentage, for the individual concepts of classical (internal combustion engine only) and hybrid drive vehicles (parallel and serial) [5]
From the graph in Figure 2, as well as from the earlier considerations, it is clear that the overall reliability of a serial hybrid vehicle is much lower than the overall reliability of a parallel hybrid vehicle. On the other hand, the reliability of both the serial and the parallel hybrid vehicle is, in most of the observed range, lower than the reliability provided by a vehicle with a conventional drive, i.e. exclusively with an internal combustion engine. This is conditioned by the fact that a conventional vehicle with an internal combustion engine has fewer components and therefore has an initial advantage in terms of reliability [5].

Telediagnostic (remote diagnostics) is part of the


technical diagnostics, i.e. its most advanced segment,
which applies information and communication
technologies for remote monitoring of technical systems
with a certain accuracy at a given time. It deals with all
aspects of technical communication between spatially
distant technical systems and refers to the technique of
transferring data at a distance [7].

The basic idea in the design of a parallel hybrid vehicle is that both the combustion engine and the electric drive are dimensioned with a lower installed power and torque than is needed to drive the hybrid vehicle as a whole, because it is assumed that both the maximum and the optimal vehicle performance are achieved in simultaneous operation of the combustion engine and the electric drive. Hence, when either the combustion engine or the electric drive malfunctions, a loss of performance of the hybrid vehicle occurs. If the designer or the user requires better performance of the hybrid vehicle, particularly in the case of failure of one or the other drive system, the design must to a certain degree oversize the combustion engine or the electric drive.

The need to develop a telediagnostics model for hybrid vehicles arises from the fact that current vehicle repair services do not yet have much experience with the maintenance of hybrid vehicles, primarily of their drive system (batteries, electric motors, etc.). This method of diagnosis would contribute to a significant increase in the level of reliability of hybrid vehicles and would facilitate maintenance by providing accurate data on the vehicle fleet, which would be achieved by continuous monitoring of the condition of the hybrid vehicles. In this way some maintenance activities on hybrid vehicles would be reduced (repairs would be performed only on the damaged parts of the vehicle) and could be planned in advance. The likelihood of incidents on the vehicle, which could be caused by, for example, overheating of the batteries, would also be reduced. Moreover, manufacturers of hybrid vehicles could identify the parts that are prone to failure and improve their design. Telediagnostics would be particularly suitable for vehicles that cover a large number of kilometres annually [8].

3. DIAGNOSTICS OF HYBRID VEHICLES
Each vehicle, whether conventional, hybrid or purely electric, should possess some sort of diagnostics (of the system or of individual components, to find the cause of a problem that has already appeared in the vehicle) and should provide adequate forecasts (finding problems that may possibly occur in the future), given the current state of

A telediagnostics model for hybrid vehicles, based on the measurement and analysis of multiple diagnostic quantities,

- classification of the development of failures,
whose general concept is shown in Figure 3, would consist of three parts:

- modelling and monitoring of degradation through reliability modelling,

- the remote hybrid vehicle (whose condition is to be remotely monitored), with built-in sensors and measuring systems,

- prediction of the remaining useful life of the hybrid vehicle with a high degree of confidence,

- a communication system,
- two telediagnostic centres (centralised locations where data are stored and analysed), one at an authorised service centre for such vehicles and the other at the manufacturer of the hybrid vehicles.

- prediction of the failure of systems:
  - whose failure may endanger the safety of people and of the vehicle,
  - whose failure could lead to a breakdown of the vehicle,
  - whose failure may affect the level of reliability,
  - whose failure can reduce the degree of functionality of the vehicle,
  - whose reliability has not been sufficiently tested in real conditions of exploitation,
  - which are extremely expensive (e.g. the battery),
  - whose service life is relatively short.

*GPRS - General Packet Radio Service


VPN - Virtual Private Network

The practical application of this telediagnostics model would allow insight into the condition of hybrid vehicles through real-time monitoring and analysis of the results by:

Figure 3 - General concept of the telediagnostics model for hybrid vehicles [7]

- a diagnostic expert in the telediagnostic centre of the authorised service for hybrid vehicles,

The hybrid vehicles would be connected with the servers of the telediagnostic centres through mobile wireless Internet. The telediagnostics process (remote monitoring of diagnostic parameters) for hybrid vehicles would consist of several phases, such as:

- diagnostic experts in the telediagnostic centre of the manufacturers of hybrid vehicles.

Thus, the practical application of this telediagnostics model would allow an increase in the level of reliability and availability of hybrid vehicles, a reduction of maintenance costs and an extension of their lifespan. Preventive maintenance activities would be carried out depending on the condition of the hybrid vehicle [7].

- continuous remote monitoring of the diagnostic parameters of vital vehicle components,
- analysis of the data to identify trends (trending),
- comparison of the parameters with known or anticipated parameters,

The benefits of such a diagnostic model would be felt not only by the owners but also by the manufacturers of hybrid vehicles, because they could determine the causes of battery failures and of the batteries' relatively short working life. That could contribute to the improvement of battery production technology, which would extend battery service life, improve performance and reduce the price. This would contribute to a wider use of hybrid vehicles and thus reduce the consumption of fossil fuels and the emission of greenhouse gases.

- early warning of the possible occurrence of a failure,
- after detecting declining performance, prediction of the moment of failure by extrapolation,
- identification of failures, and
- maintenance planned only when it is really necessary, timed so as to prevent or postpone the failure.
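A hypothetical sketch of the continuous-monitoring and early-warning phases: a vehicle-side agent packs a few diagnostic parameters into a message and the centre flags readings approaching their limits. The parameter names, limits and message layout are invented for the illustration and are not part of the model described in [7].

import json, time

LIMITS = {"battery_temp_C": 55.0, "motor_bearing_temp_C": 90.0, "coolant_temp_C": 95.0}

def build_message(vehicle_id, readings):
    # vehicle side: pack readings into a JSON telemetry message
    return json.dumps({"vehicle": vehicle_id, "t": time.time(), "readings": readings})

def early_warnings(message, margin=0.9):
    # centre side: flag readings above 90 % of their warning limit
    data = json.loads(message)
    return [name for name, value in data["readings"].items()
            if name in LIMITS and value >= margin * LIMITS[name]]

msg = build_message("HEV-001", {"battery_temp_C": 51.0,
                                "motor_bearing_temp_C": 72.0,
                                "coolant_temp_C": 88.0})
print(early_warnings(msg))   # -> ['battery_temp_C', 'coolant_temp_C']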
The telediagnostics system (remote condition monitoring) of hybrid vehicles would have to meet the following criteria:
- transparency (it provides a complete picture of the state of the vehicle and delivers timely information on its current state),

4. OPTIMIZATION OF THE MAINTENANCE SYSTEM

- openness (the possibility of integration into other systems, i.e. the ability to exchange information with systems that work on other protocols), and

Maintenance of technical systems can be implemented in several variants, according to several strategies, with greater or lesser differences in the basic characteristics of the individual solutions. As soon as there are multiple variants, the question arises which one to choose, i.e. which is the best. The answer to this question is, in principle, not easy, so when choosing a maintenance strategy the following two major facts should be taken into account:

- scalability (the ability to upgrade at minimal cost while at the same time preserving the functionality of the system).
The tasks of continuous telediagnostics (remote monitoring of the condition in real time) would be:

- each maintenance strategy variant causes certain effects, a certain dependability, certain costs and other characteristics of the maintenance system, and therefore the output characteristics of the system for each variant must be

- detection of failures in the initial phase of their development (creation), in order to raise the level of confidence in the detection,

expressed clearly and quantitatively;

logic uses the experience of experts in the form of linguistic if-then rules, and a mechanism of approximate reasoning (Figure 4) is used as the controller for the specific case. A key aspect of the application of fuzzy logic is the development of a theory which formalises everyday informal reasoning so that it can, as such, be used to program a computer.

- comparison of the different strategy variants is a multi-criteria problem (dependability, cost, etc.), which can be successfully solved only if all the important requirements and constraints, and the objective function being observed, are clearly identified.
The selection of the optimum with defined criteria and defined constraints is the direct task of optimisation. It is not enough to determine the optimal strategy; it is also necessary to explain for which criteria and which constraints it is the best solution, since these form the basis on which the estimate is made [9].
Optimisation of the maintenance system can be done in different ways. What is optimised is a model, a simplified scheme of the process, and not the physical essence of maintenance as a stochastic process. The analysis and optimisation of the maintenance system can be carried out using:
- mathematical models,
- empirical-heuristic models.

Figure 4 - Graphical representation of the approximate reasoning process [2]

Analysis and optimisation of the maintenance system by using mathematical models provides the following [10]:

The fuzzy controller is a central part of the control of hybrid vehicles. A fuzzy controller can be implemented as a program running on a personal computer and connected with the process in the usual way, as in the case of classical control. In this case, the fuzzy controller is used for intelligent control, so that the knowledge of the experts - the operators - is exploited in the control process. Of course, when necessary, the fuzzy controller can be implemented in the form of a microprocessor in small devices [11].
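A minimal sketch of the if-then mechanism described above: triangular membership functions fuzzify one measured quantity, two linguistic rules are evaluated, and a crisp output is obtained by a weighted average (a simplified Sugeno-style step). The input variable, the rule base and all numbers are illustrative assumptions only.

def tri(x, a, b, c):
    # triangular membership function with peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def maintenance_urgency(vibration_mm_s):
    # IF vibration is LOW THEN urgency 0.2; IF vibration is HIGH THEN urgency 0.9
    low = tri(vibration_mm_s, -1.0, 0.0, 7.0)
    high = tri(vibration_mm_s, 4.0, 11.0, 20.0)
    if low + high == 0.0:
        return 0.0
    return (low * 0.2 + high * 0.9) / (low + high)   # weighted-average defuzzification

print(round(maintenance_urgency(6.0), 2))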

- it provides the opportunity to observe the technical system as a whole, as an entity, and, by simulation and other techniques, to define the impact of variable parameters,
- it allows the comparison of several possible variants,
- it facilitates the detection of links between certain influential parameters that were not previously observed or that cannot be established by verbal and experiential methods,

The possibilities for the application of fuzzy logic are great. Some examples of fuzzy controllers in motor vehicles in Japan and Korea, countries that are leading in the practical application of fuzzy technology, are [11]:

- it indicates the information that has to be provided in order to carry out the necessary analyses,
- it facilitates the prediction of future conditions or events, with risk assessment or confidence limits.

- fuzzy brakes (Nissan): controls the brakes in dangerous situations based on the speed and acceleration of the vehicle and on the speed and acceleration of the wheels,

On the other hand, the empirical-heuristic model provides the following [10]:
- it includes factors that cannot be included in a mathematical model, i.e. that cannot be analytically and unambiguously linked to other factors,

- vehicle engines (NOK, Nissan): controls the fuel injection and ignition depending on the state of the fuel supply valve, the flow (volume) of oxygen, the cooling water temperature, the number of revolutions per minute, the volume of fuel, the crankshaft angle, engine vibration and the pressure in the intake part of the engine;

- it allows the analysis of subjective and other factors which cannot be described analytically.
A special form of hybrid vehicle maintenance optimisation is achieved by using fuzzy logic. According to Lotfi Zadeh, professor of computer science at the University of California at Berkeley and the founder of fuzzy logic, fuzzy logic can have two different meanings: in a broader sense, fuzzy logic is synonymous with the theory of fuzzy sets, which refers to objects with unclear borders whose membership is a matter of degree, while in the narrow sense fuzzy logic is a logical system that is an extension of classical logic. The essence of fuzzy logic is very different from that of so-called traditional logic [2].

5. CONCLUSION
Fuzzy logic relies on the principle of incompatibility, which means that, as the complexity of a statement grows, its precision and its relevance become mutually exclusive. Fuzzy logic is a logic that allows multi-valued intermediate states defined between the traditional binary opposites: true - false, yes - no, black - white, etc. Fuzzy

The general observation that can be deduced by examining the maintenance, diagnostics and reliability of hybrid and electric vehicles is that hybrid vehicles, compared with conventional vehicles, provide much greater opportunities for the application of advanced methods and technologies

- the transmission system in the vehicle (Honda, Nissan, Subaru): selects the gear depending on the engine load, the driving style and the road conditions;
- control of vehicle motion (Isuzu, Nissan, Mitsubishi): adjusts the state of the fuel supply valve based on the speed and acceleration of the vehicle.


for diagnosing the condition. In addition, the reliability in terms of providing the basic functions of the vehicle is higher in hybrid vehicles, because they have the possibility of switching from one type of drive to the other in case of a malfunction; on the other hand, in terms of ensuring the full serviceability of all system elements, hybrid vehicles are less reliable, primarily due to the complexity of the transmission and the application of a wide range of electronic components.

Modeling and Simulation of Electric and Hybrid Vehicles, University of Michigan, 2009.
[4] David Wenzhong Gao, Chris Mi, Ali Emadi, Modeling and Simulation of Electric and Hybrid Vehicles, University of Michigan, 2009.
[5] C. Mi, M. Abul Masrur, D. Wenzhong Gao, Hybrid Electric Vehicles: Principles and Applications With Practical Perspectives, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom, 2011.
[6] Ilić, B., Adamović, Ž., Jevtić, N., Automatizovana dijagnostika ispitivanja mašina u procesnoj industriji, Naučno-stručni časopis Tehnička dijagnostika, vol. 11, br. 3, str. 11-18, Beograd, 2012.
[7] Ilić, B., Adamović, Ž., Savić, B., Grujić, G., Predlog originalnog modela teledijagnostike električnih automobila, Tehnička dijagnostika, Visoka tehnička škola strukovnih studija, Beograd, 2014.
[8] Krstić, Božidar, Automatizacija procesa dijagnostike motornih vozila, 3. konferencija Održavanje 2014, Zenica, BiH, 11-13. jun 2014.
[9] Stanojević, P., Mišković, V., Strategija održavanja tehničkih sistema, Vojnotehnički glasnik, broj 6, pp. 537-554, Beograd, 2003.
[10] Stanojević, P., Mišković, V., Mogućnosti i problemi primene savremenih strategija održavanja u vojnim sistemima, Vojnotehnički glasnik, broj 2, pp. 133-146, Beograd, 2004.
[11] Subašić, P.: Fazi logika i neuronske mreže, Tehnička knjiga, Beograd, 1997, str. 201.

Due to the large number of possible architectures for hybrid and electric vehicles, the development of the next generation of vehicles will require increasingly advanced and innovative simulation, whose models must include the maintenance and servicing of the vehicles. It must be borne in mind that the complexity of a model does not in itself mean quality; the key features of the applied model, in addition to its quality and flexibility, should include simplicity, which is sometimes crucial from the point of view of the model user, i.e. whoever deals with preventive and corrective maintenance.

References
[1] K. Muta, M. Yamazaki, and J. Tokieda, Development of new-generation hybrid system THS II - Drastic improvement of power performance and fuel economy, presented at the SAE World Congress, Detroit, MI, March 8-11, 2004, SAE Paper 2004-01-0064.
[2] Krstić, Božidar, Primena fuzzy logike pri održavanju tehničkih sistema, Mašinski fakultet, Kragujevac, 2005.
[3] David Wenzhong Gao, Chris Mi, Ali Emadi,

A NEW APPROACH TO CREATING AND MANAGING TECHNICAL PUBLICATIONS FOR AIRCRAFT LASTA USING S1000D STANDARD

BRANKO DRAGIĆ
CPS - Cad Professional Systems d.o.o., Belgrade, branko@cadpro.co.rs
VOJISLAV DEVIĆ
Military Technical Institute, Belgrade, vojislav.devic@vti.vs.rs
MIODRAG IVANIŠEVIĆ
Military Technical Institute, Belgrade, miodrag.ivanisevic@vti.vs.rs

Abstract: Modern technical publications of complex products and equipment from aerospace and defense industry (and
beyond) set wide variety of demands and conditions that need to be fulfilled for their efficient, economical and safe use.
Increasing complexity of products demand increasing complexity of documentation supporting those products. For
stated reasons, modern technical publications are developed as only one, but very significant part of the entire system
of Integrated Logistic Support (ILS) of a certain product.
Keywords: Technical publications, standardization, S1000D.

1. INTRODUCTION

Initiative for development of ILS systems was originated


in early 80s of the 20th century from the need to
coordinate larger number of aerospace and defense
industries from different countries that worked on joint
projects. It was necessary to define unified system of
standards and specifications among all participants.

Technical publication must provide to its users the ability


to get fully acquainted with complex product structure. It
also needs to provide precise information about rules of
operation and usage in sufficient depth and scope which is
necessary for adequate handling, maintenance and
monitoring of key characteristics and parameters of
complex product in operational service.

Main idea was to provide that supportability of a certain


product can be analyzed in earliest possible phase of
design and development process.

Rapid development of Information technologies enabled


both production and utilization of such technical
documentation (TD) systems that switched from old paper
form to new computer data bases. That provided
numerous improvements and new possibilities in the
whole process. The new approach in design and usage of
TD is that focus is set on providing specific piece of
information that is required by the user at that moment,
rather than going through extensive paper documentation
and finding the same information.

ILS is based on feedback relation that provides early


detection of and solution to problems related to
component maintainability, reliability and testability
through design change, revision and optimization.
Joint international effort of Aerospace and Defense
Industries Association of Europe ASD, and Aerospace
industries Association of America AIA, created set of
specifications that is now in wide use in many industries
and adopted by many countries.

2. INTEGRATED LOGISTIC SUPPORT
Integrated Logistic Support (ILS) is the development of a technical/information environment that serves as support to a certain product throughout its entire intended life cycle. ILS represents the connection between the process of design/development of the product and the process of service use of the same product.
The ILS system is intended to provide complete insight into all relevant resources of a product. By applying Logistic Support Analysis (LSA) as part of ILS, it is possible to obtain an optimized product that will cost less to develop and produce, and that will have a longer service life and more effective maintenance.

Picture 1. ILS Overall business process, the S-series [1]



The S-series of ILS specifications is named SX000i and it is made up of the following:


Other benefits of using S1000D are that:


- it is based on international neutral standards
- it reduces maintenance costs for technical information
- it transforms data into configuration items
- it allows subsets of information to be generated to
meet specific user needs
- it facilitates the transfer of information and electronic
output between different systems
- many different output forms can be generated from the
same base data thus ensuring safety of data and that
every user regardless of output form is getting the
same message
- the S1000D data module concept can be applied to
legacy data
- it is non-proprietary and allows neutral delivery of
data and management of data

Table 1. ILS S-Series specifications

S1000D - International specification for technical publications using a common source database
S2000M - International specification for Material Management - Integrated Data Processing
S3000L - International specification for Logistic Support Analysis (LSA)
S4000P - International specification for developing and continuously improving preventive maintenance
S5000F - International specification for in-service data feedback
S6000T - International specification for Training Analysis and Design

The specification addresses two main delivery methods of


technical publications and training content packages:
- Data exchange - S1000D objects (data modules and
supporting objects) delivery for further processing
- Publishing - Delivery of publications and training
content packages - Information ready to use

3. TECHNICAL DOCUMENTATION
Operation and maintenance of highly complex products,
supporting equipment and systems from aerospace and
defense industry is closely relying on thorough technical
documentation support. Technical documentation in form
of technical publications (electronic and/or paper) must
provide user/operator with all necessary and relevant data.

3.1. S1000D Purpose and Scope


S1000D is defined as an international specification for the
procurement and production of technical publications. It
covers the process of planning, management, production,
exchange, distribution and use of technical documentation
that support the life cycle of any military and civil
aerospace project, and as of issue 2.0 from year 2003
including land and sea vehicles or equipment. Latest issue
is 4.1 published in year 2012.
Picture 2. Documentation process [2]

The specification adopts concepts and standards of


International Standards Organization (ISO), Continuous
Acquisition and Life-cycle Support (CALS) and World
Wide Web Consortium (W3C), in which information is
generated in a neutral XML format. This means that it can
be implemented on different and often disparate systems.
Neutrality, added to the concept of modularization, makes
the specification applicable to the wider international
community.

3.2. S1000D Documentation process
Prior to documentation creation, in terms of authoring the technical content into data modules, it is necessary to perform several important customization agreements and steps.
S1000D specification has been produced to serve for
many different types of products. Therefore, to make it
suitable for a given project or organization, some aspects
of tailoring will be required. It is common practice that
the tailored version of this specification is referred to in
the projects contractual documentation under section
"Business rules".

Information produced in accordance with S1000D is


created in a modular form, called a "data module". A data
module is defined as "the smallest self-contained
information unit within a technical publication". [1]
All data modules applicable to the Product are gathered
and managed in a Common Source Data Base (CSDB).
Key benefit of the CSDB is to enable production of
platform-independent output in either page oriented or
Interactive Electronic Technical Publication (IETP) form.

An information set is defined as required information in a


defined scope and depth in form of relevant data modules.
Info sets are managed in the project CSDB. Data Module
Requirements List (DMRL) is a list of all required data
modules for that particular project. A publication is set of
selected information published for certain customer.

Data managed in S1000D is not duplicated in the CSDB.


Data modules enable data to be stored only once and used
for multiple outputs as necessary. A single change to an
individual data module can update multiple outputs and
multiple deliveries.
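A toy sketch of this single-source idea: one data module stored once in a CSDB-like store is referenced by several publication modules, so a single edit shows up in every generated output. The data structures are invented for the illustration and do not follow the S1000D schema.

csdb = {"DMC-WING-DESC": "Description of the left wing structure."}
publications = {
    "Maintenance manual": ["DMC-WING-DESC"],
    "Illustrated parts catalog": ["DMC-WING-DESC"],
}

def render(pub_name):
    # assemble a publication from the data modules it references
    return "\n".join(csdb[dmc] for dmc in publications[pub_name])

csdb["DMC-WING-DESC"] += " (revised)"      # one change...
print(render("Maintenance manual"))        # ...appears in every output
print(render("Illustrated parts catalog"))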

Information sets are divided into three main groups:


- Common information sets
- Air specific information sets
- Land/Sea specific information


Common information sets provide the following data:
- Crew/Operator information
- Description and operation
- Maintenance information
- Wiring data
- Illustrated Part Data (IPD)
- Maintenance planning information
- Mass and balance information
- Recovery information
- Equipment information
- Weapon loading information
- Cargo loading information
- Stores loading information
- Role change information
- Battle damage assessment and repair information
- Illustrated tool and support equipment data
- Service bulletins
- Material data
- Common information and data
- Training
- List of applicable publications
- Maintenance checklists and inspections

Upon defining a project specific information sets, and


creation of DMRL, data modules are ready to be authored
with technical, graphical, multimedia or even interactive
3D CAD content.

Depending on specific customer needs, selected


compilation of information in form of data modules is
published to a certain criteria in form of publication
modules. Publication modules can have output format as
IETP or paper whatever is requested by customer.

Engine Equipment
Ground Accessories
Educational and training equipment
Loading
Loading cargo on aircraft
Loading ordnance on aircraft
Loading inventory and equipment on aircraft
Integrated services of aircraft
Change of aircraft purpose
Glider repair
Non-destructive testing
Corrosion protection
Illustrated parts catalog
Battle damage repair
Rescue operations for the recovery
Storage of aircraft
Illustrated catalogs of tools and equipment
General information Checklists
The structure of standard technologies
Type system technology
Typical technology for electrical / electronic
systems
Normative documentation
Data on materials
List of materials
List of Consumable materials
Data on materials with limited shelf life
Service bulletins/Operational Information

Documentation is subject to change due to feedback


relation with products in service use. Key benefit of
S1000D CSDB modularity concept is that any revisions
on some data modules reflect through all publication
modules, thus reducing time and cost of documentation
republishing.

S1000D introduces these types of publications for


aerospace industry products:


Flight crew information


Flight Manual
Aircraft operational manual
Performance characteristics
Checklists
Quick Reference
Loading and balancing the aircraft
Maintenance
Aircraft maintenance
Aircraft description
Maintenance tasks
Maintenance checklists
Technological charts
Fault isolation procedures
Operational schematics
Electrical schematics
Maintenance interval planning
Maintenance Requirements
Checklists
Technological charts
Typical technological logistics and engine
repair
Engine Maintenance
Installation of the power plant
Maintenance of equipment
On-board equipment

3.3. Data module code


Since CSDB in general is comprised of large number of
data modules, their management is made possible by
utilizing the data module code used to retrieve them or to
gain access to them in an electronic environment.
The data module code is the standardized and structured
identifier of separate data module and it is contained in its
identification section. The data module code is part of the
unique identifier of each data module.
Data module code is made of several sections of up to 41
alphanumeric characters in total.

Picture 3. Data Module Code example [2]


The partitions of the data module code are the following:
- Model identification code
- System difference code
- Standard Numbering System (SNS): system code, subsystem + sub-subsystem code, unit or assembly code
- Disassembly code
- Disassembly code variant
- Information code
- Information code variant
- Item location code
- Learn code
- Learn event code

Coding of data modules is essential to provide precise


information about what exact piece of hardware or system
is undergoing specific maintenance action, with respect to
where that action needs to take place.
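As a rough illustration of how such a structured identifier can be composed and taken apart in software, the sketch below joins the main partitions into one code string and splits it back; the field values (model identification L3, system difference code A, system 57 for the wings, etc.) and the simplified hyphen-separated layout are assumptions for the example, not the normative S1000D coding rules.

from dataclasses import dataclass, astuple

@dataclass
class DataModuleCode:
    model_ident: str     # e.g. "L3" (LASTA)
    system_diff: str     # e.g. "A"
    system: str          # SNS system, e.g. "57" (wings)
    sub_system: str      # subsystem + sub-subsystem, e.g. "10"
    unit: str            # unit or assembly, e.g. "00"
    disassy: str         # disassembly code + variant, e.g. "00A"
    info: str            # information code + variant, e.g. "941A"
    item_location: str   # e.g. "A"

    def as_string(self):
        return "-".join(astuple(self))

dmc = DataModuleCode("L3", "A", "57", "10", "00", "00A", "941A", "A")
print(dmc.as_string())                                        # L3-A-57-10-00-00A-941A-A
print(DataModuleCode(*dmc.as_string().split("-")) == dmc)     # round-trip check -> True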


30 Wing tip
50 Flaps
60 Ailerons

Picture 4. CSDB structure and system decomposition

4. APPLICATION OF S1000D ON LASTA AIRCRAFT
In 2015, company Cad Professional Systems (CPS) from
Belgrade made joint effort with Military Technical
Institute in Belgrade (VTI) to start a pilot project of the
first introduction and implementation of S1000D
specification in domestic aircraft industry. The trainer
aircraft LASTA was selected for that purpose, and parts
of wing structure were processed to obtain Illustrated
Parts Catalog (IPC).
Software packages used for this purpose are leading
industry solutions for creation and authoring of technical
documentation by S1000D specification.

Picture 5. Wings decomposed

All technical illustrations were done using PTC IsoDraw


converting existing or modeling new CAD data. Creation
and management of Common Source Data Base (CSDB)
and authoring of technical content was done using
Technical Guide Builder TG Builder 4.0 software
developed by Applied Logistics.

Illustrated Parts Data (IPD) structure was developed with


standard and obligatory part attributes defined by
specification. This attributes are item number, part
number, description and quantity per next higher level of
assembly. System also allows for numerous additional
attribute fields customized as per needs of user/customer.

According to specification, technical description and


maintenance manual [3], a Data Module Requirement List
(DMRL) has been made for the aircraft in general.
Furthermore, Standard Numbering System (SNS) was
developed with detailed decomposition for section
number 57 Wings. Data module code was adopted to
have 17 alphanumerical characters of length, with Model
Identification assigned L3 for LASTA, with system
difference code assigned A.

One of key benefits of using technical documentation in


electronic data base form is that system provides easy part
attribute search and sort functionality that can quickly
provide information to user in comparison to old paper
form documentation.

Adopting this project specific business rules provided


elements necessary for creation of working template
which is set of environment configuration parameters like
logos, dictionaries, abbreviations etc. that are used in
documentation. Based on developed working template, a
CSDB structure was created.
Standard numbering system coding of section/system 57
was further developed for following subsystems and their
sub-subsystems and units according to S1000D, issue 4.1,
chapter 8.2.5 Maintained SNS - Air vehicle, engines and
equipment [1]:

00 General

10 Center wing

Picture 6. Part attributes in bill of materials (BOM)


Picture 7. Interactive IPD BOMs


Parts in IPD can be searched either by attributes in a list,
or directly from zoomable interactive illustrations. Hot
point item numbers identify part attributes in a list, and
vice versa. System also supports interactive 3D models in
several formats instead of technical illustrations that can
be rotated and manipulated for easier part identification.
Picture 9. Published print ready documentation

Illustrated Parts Catalogs, as S1000D information sets in general, can be produced at different levels of detail and depth, specific to customer needs. Once the CSDB is formed and populated with data modules, it can be managed to provide technical publications at the operational, maintenance or production level of detail, all from one source. The system can also provide order lists for spare parts procurement.

5. CONCLUSION
Implementation of S1000D in general brings numerous
benefits. Time and cost of documentation production and
maintenance are greatly reduced. Once produced, data
modules of certain project can be reused in unlimited
number of custom tailored publications due to the system
modularity. Technical documentation takes unified and
standardized form recognizable among different suppliers
from different origins providing mutual interoperability.

TG Builder system provides possibility to reuse technical


part of data modules written in one language, and to
translate them in any number of data modules in another
languages of choice. System can then be set to display all
data modules in selected language environment. This
significantly reduces cost and effort needed to produce
multilanguage documentation.

Specification S1000D is one part of Integrated Logistic


Support (ILS) system that significantly improves general
combat readiness and ability to track and manage
complex resources more efficiently. This project could be
introduction and beginning of development of complete
ILS system for aircraft LASTA.

Completed Illustrated Part Catalog for section 57 Wings


(Left wing) was published both in Interactive Electronic
Technical Publication (IETP) and paper form. IETP is
published in stand-alone and self-contained TG Browser
format that can be viewed on any PC platform with
Microsoft Windows operating system. Paper version is
published in ISO A4 format with standard S1000D page
elements and formatting. This formatting is unique for
any S1000D compliant document worldwide.

S1000D specification alone, provides possibility to be


implemented not only in domestic Aerospace industry
like demonstrated by LASTA project. It can also be used
for any land vehicle and artillery systems like LAZAR,
BOV, NORA or infantry weapon systems and much
more.
References
[1] ,..,
,..,
,..,
,..,
,..:

,
, , 2014.
[2] S1000D - International specification for technical publications using a common source database, ASD/AIA, Issue 4.1, 2012.
[3] Vojnotehnički institut - LASTA: Privremeni tehnički opis i uputstvo za održavanje, VTI, Beograd, 2011.

[4] ,.., ,..:



TG Builder,
, , 2014.

Picture 8. LASTA 95 IETP



NEW ISSUE OF STANDARD AS/EN 9100:2016, EXPECTATION AND BENEFITS FOR CUSTOMERS

BILJANA MARKOVIĆ
Faculty of Mechanical Engineering, University of East Sarajevo / ORAO a.d. for production and overhaul, Bijeljina, Republic of Srpska, biljana46m@gmail.com

Abstract: This paper presents and explains the new elements in the revised issue of the AS/EN 9100:2016 standard for the aviation, space and defence (AS&D) industry. The main subjects of the paper are the objectives of the revised standard, the reasons for the revision, the revision timeline, the new clause structure, the key changes, the revision activity and, finally, the benefits and application of the new standard with regard to customer needs. These elements are the most important for all companies which need to perform the transition of their QMS from the previous issue, AS/EN 9100:2009, to the new one before September 2018.
Keywords: requirements, new standard, EN/AS 9100:2016, benefits.

1. INTRODUCTION
Successful aviation, space and defence businesses understand the value of an effective Quality Management System (QMS). It helps them continually improve, focus on meeting customer requirements and ensure customer satisfaction.
The International Aerospace Quality Group (IAQG), which maintains the AS/EN 9100 series of quality management standards, has decided to continue to base the series on ISO 9001 with some additional enhancements. All ISO management system standards are subject to a regular review under the rules by which they are written. Following a substantial user survey, the ISO 9001 committee decided that a revision was appropriate to maintain the standard's relevance in today's marketplace.

ISO 9001 is the world's most recognized management system standard and it is used by over a million organizations across the world. The new version has been written to maintain its relevance in today's marketplace and to continue to offer organizations improved performance and business benefits. The revised AS/EN 9100 series of standards builds on this revision to add clarity and enhance ease of use while addressing industry and stakeholder needs.

AS/EN 9100 Aerospace Management Systems is a widely adopted and standardized quality management system for the aerospace industry. It was introduced in October 1999 by the Society of Automotive Engineers in the Americas and the European Association of Aerospace Industries in Europe, and the International Aerospace Quality Group (IAQG) developed the AS9100 document. AS/EN 9100 encompasses ISO 9001, with additional requirements for quality and safety relevant to aerospace, and defines the quality management systems standard for the industry. All major aerospace manufacturers (original equipment manufacturers, OEMs) and suppliers worldwide endorse or require certification to AS/EN 9100 as a condition of doing business with them.

The AS/EN 9100 series International Aviation, Space and Defense Quality Model has approximately 105 additional requirements beyond ISO 9001 (Picture 1), including:
- Configuration Management,
- Risk Management,
- Special Requirements,
- Critical Items,
- On Time Delivery,
- Project Management,
- Supplier Scope of Approval.

The objectives of the AS/EN 9100 standard are to:
- Establish commonality of aviation, space and defense quality systems, as documented and as applied,
- Establish and implement a process of continual improvement to bring initiatives to life,
- Establish methods to share best practices in the aviation, space and defense industry,
- Coordinate initiatives and activities with regulatory/government agencies and others.

Picture 1. What is AS/EN 9100 standard? [2]



Why does the AS&D industry have its own standards? There are many reasons, some of which are the following:
- High risk products,
- High cost products,
- Tightly controlled industry requirements (statutory, regulatory, customer),
- Safety is a must,
- Quality is required,
- Failure is not an option [2].

The revision of AS/EN 9100:2016 is now at the final draft stage and it is on target for release in October 2016.

1.1. Key Benefits of introducing AS/EN 9100 requirements

The aircraft and aerospace industries have embraced AS9100 as a critical tool for improving quality and on-time delivery within their supply chains. Most of the major aircraft engine manufacturers require AS9100 certification for their suppliers.
Benefits of certification to the AS9100 global industry standards include:
- A qualification to supply major aerospace manufacturers,
- Easy integration into existing quality management systems, as the AS9100 standard is based on ISO 9001 with additional, industry-critical criteria,
- Access to the best practices of the aerospace industry for quality and traceability, helping to reduce operational risk,
- Enhanced marketability of products and services through third-party proof of the company's commitment to deliver high-quality products and services,
- A focus on customer satisfaction: performance objectives must be aligned to customer expectations,
- Access to global markets through internationally recognized certification [1].

2. REASONS FOR CHANGE

All standards go through a regular update to bring them in line with industry changes and developments in technology. The 9100 series (AS9110, AS9120 and AS9115) is being updated to:
- Incorporate the changes to ISO 9001:2015;
- Consider aviation, space and defense stakeholders' needs (a web survey was performed in 2013);
- Incorporate clarifications to the 9100 series requested by IAQG users since the last revision.
The upcoming revision will focus on adding clarity and enhancing ease of use, while addressing industry and stakeholder needs.

2.1. Areas of focus changes

- Product Safety: added in a separate clause and in selected areas;
- Counterfeit Parts Prevention: added in a separate clause and in selected areas;
- Risk: current 9100 requirements merged with the new ISO requirements, with emphasis on risks in operational processes;
- Configuration Management: clarified and improved to address stakeholder needs;
- Awareness: reinforced requirements for awareness of the individual contribution to quality;
- Human Factors: included as a consideration in nonconformity / corrective action;
- Product Realization and Planning: clarified and enhanced planning throughout the standard;
- Post-Delivery Support: current 9100 requirements merged with the new ISO requirements;
- Project Management: combined with Operation Planning to address user interpretation issues;
- Design Development and Supplier Management: gap analysis performed; ISO text has been added back in a few places to meet IAQG needs;
- Quality Manual: a note added pointing to the requirements that make up a quality manual or its equivalent;
- Management Representative: requirement added back in for Management Representative QMS oversight.

3. ISO 9001:2015 AND AS/EN 9100:2016 REVISION ACTIVITY

Why revise ISO 9001? Many reasons exist for the revision of the ISO 9001:2015 requirements. The most important are to:
- Adapt to a changing world,
- Enhance an organization's ability to satisfy its customers,
- Provide a consistent foundation for the future,
- Reflect the increasingly complex environments in which organizations operate,
- Ensure the new standard reflects the needs of all interested parties,
- Integrate with other management systems.
Picture 2 shows the plan-do-check-act manner of performing a QMS in an AS&D company.



Picture 2. QMS structure [2]


The key changes with regard to ISO 9001:2015 are:
- Greater emphasis on process,
- Alignment with the strategic direction of the company,
- Integration of the QMS into the organization's business processes,
- Determining risks and opportunities,
- Emphasis on change management,
- Introduction of knowledge management,
- Increased performance evaluation,
- An expanded Improvement clause.

Picture 3. 9100 series changes - high level summary [2]

The following fact needs to be mentioned here: as part of the alignment with other management system standards, a common clause on Documented Information has been adopted. The terms "documented procedure" and "record" have both been replaced throughout the requirements text by "documented information". Where previous versions would have referred to documented procedures (e.g. to define, control or support a process), this is now expressed as a requirement to maintain documented information. Where previous versions would have referred to records, this is now expressed as a requirement to retain documented information. Requirements to maintain documented information are detailed throughout the standard and some examples are given.

Picture 4. Documented information [3]

3.1. AS/EN 9100 series transition timeline

Picture 5. 9100 series transition guidance [3]

Transition is an opportunity: what does a company need to do to make the transition from the previous issue of the standard to the new one?




1. Take a completely fresh look at the QMS;
2. Attend a suite of transition training courses to understand the differences in more detail;
3. Highlight the key changes as an opportunity for improvement;
4. Make changes to documentation to reflect the new structure (as necessary);
5. Implement the new requirements on leadership, risk and the context of the organization;
6. Review the effectiveness of the current control set;
7. Assume every control may have changed;
8. Carry out an impact assessment.

With the 2016 versions of AS/EN 9100, a company will be able to:
- Introduce an integrated approach with other management system standards,
- Bring quality and continual improvement into the heart of the organization,
- Increase the involvement of the leadership team,
- Introduce risk and opportunity management.

The new standards will be much less prescriptive than the previous versions and can be used as more agile business improvement tools. This means that an organization can make the new standards relevant to its own requirements in order to gain sustainable business improvements. One of the major changes to the AS/EN 9100 series is that it brings quality management and continual improvement into the heart of the organization. This means that the new standard is an opportunity for organizations to align their strategic direction with their quality management system. The starting point of the new version of the standard is to identify the internal and external parties and issues which affect the QMS. This means that it can be used to help enhance and monitor the performance of an organization, based on a higher-level strategic view.

4. BENEFITS AND APPLICATION


Our customers tell us they get multiple benefits as a result of implementing and adopting a system that meets the requirements of the AS/EN 9100 series. The new versions will continue to do this and provide additional value.
The new AS/EN 9100 series of standards will:
- Facilitate continual improvement: regular assessment will ensure that processes are continually used, monitored and improved,
- Increase market opportunities, since excellent levels of traceability throughout the supply chain can be demonstrated to clients,
- Increase efficiency, saving time, money and resources,
- Ensure compliance with a system supported by regulatory authorities, which helps to mitigate company risks,
- Motivate, engage and involve staff through more efficient internal processes,
- Help the company trade, as it is often a requirement of the aerospace industry that a company has an implemented QMS; certification also independently demonstrates that the company operates a management system accepted by the aerospace sector.

When a company has implemented and manages the new issue of the AS/EN 9100:2016 requirements, it can ensure that:
- safe and reliable products are produced and continually improved,
- customer and regulatory requirements are met or exceeded to ensure satisfaction,
- the processes necessary to conduct day-to-day business are defined and managed,
- documentation accurately reflects the work to be performed and the actions to be taken,
- the focus is on the complete supply chain and its stakeholders,
- fewer customer-unique documents are needed,
- the system is recognized by regulatory authorities.

Also, implementing the requirements of this standard can improve organizational elements which are basic for QMS management. The new issue:
- establishes commonality of aviation, space and defence (AS&D) QMS requirements,
- takes into account new requirements from AS&D and other QMS standards,
- incorporates stakeholders' feedback,
- provides a common baseline with ISO 9001:2015, which benefits suppliers with dual certification requirements and sub-tier suppliers who need only ISO 9001, while the commonality also enhances auditor flexibility and reduces training.

5. CONCLUSION
Considering all of the previously mentioned parts of this paper, the most important benefits of the new version of AS/EN 9100:2016 are:
- Less prescriptive, but with a greater focus on achieving conforming products and services,
- More user friendly for service and knowledge-based organizations,
- Greater leadership engagement,
- More structured planning for setting objectives,
- Management review aligned to organizational results,
- The opportunity for more flexible documented information,
- Organizational risks and opportunities addressed in a structured manner,
- Supply chain management addressed more effectively,
- The opportunity for an integrated management system that addresses other elements such as environment, health & safety, business continuity, etc.

All IAQG AS&D companies are certified to a version of 9100, 9110 or 9120, and all IAQG and sector member companies flow down 9100, 9110 or 9120 to their supply chain. Note: supply-chain flow-down of 9100 is based on eligibility criteria, and the organization may allow deviations as applicable.

Business needs and expectations have changed significantly since the last major revision of ISO 9001 in the year 2000. Examples of these changes are ever more demanding customers, the emergence of new technologies, increasingly complex supply chains and a much greater awareness of the need for sustainable development initiatives. This was the basic reason for the decision to prepare the new issue of the AS/EN 9100:2016 standard.

References
[1] NSF: AS 9100:2016 Transition Guide, presentation, 2015.
[2] IAQG: 9100 series 2016 Revision Overview, presentation, October 2015.
[3] BSI: Moving to the AS/EN 9100:2016 Series Transition Guide, presentation, 2015.
[4] Marković, B.: Application of international standard EN 9100:2009 requirements in aircraft industry, Proceedings of COMET-a 2012, 1st International Scientific Conference on Mechanical Engineering Technologies and Applications, Jahorina, 28-30 November 2012, Republic of Srpska, pp. 549-556.
[5] Marković, B.: Aircraft and defense industry got to amended international standards, 4th International Scientific Conference on Defensive Technologies, OTEH 2011, organized by the Military Technical Institute, Belgrade, 6-7 October 2011.
[6] Marković, B.: Next version of standard ISO 9001:2015 - reflection on aircraft company QMS, Proceedings of COMET-a 2014, 1st International Scientific Conference on Mechanical Engineering Technologies and Applications, Jahorina, 2-5 December 2014, Republic of Srpska, pp. 613-620.

