!! 1000th Blog Post !!

Friday, 27 March, 2015

This post marks the 1000th entry for the Spaced-OoooO-Out blog. My first post was on 12 September 2005 outlining a talk I gave at the Society of Cartographers annual meeting in Cambridge on the Journal of Maps. I then went on to outline the current debates between open data and the walled garden of the Ordnance Survey - speaking at the conference were (then CTO at OS) Ed Parsons and (then OpenGeodata) Jo Walsh. Oh how times have changed, with Ed jumping to Google and OS opening up much of their data (and continuing to do so).

I never intended to spend 10 years blogging, much less write 1000 blog posts. That averages out at 100 per year, or about 2 a week. I'm not the most prolific poster, but I have been consistent in terms of delivery, style and content…. very much focused on my teaching and research in GIS, remote sensing and geomorphology, with heavy smatterings of general IT around this.

Blogs have been around in various guises since people could leave publicly readable messages on the internet; however, the evolution from regularly updated static web pages to bespoke server platforms designed for blogging didn't really happen until the late 1990s, when popularity started to spread. By 2004/5 blogging had hit the mainstream along with the rise of Web 2.0 technologies, of which it was definitely a part.

So why blog? Well, from a personal perspective I was a relatively new lecturer at Kingston University at the time, blogging was very much an area of "buzz" and I wanted to raise my profile. However there is more to it… I really like the comments Donald Clark made about the (lack of) use of blogging in education. Culled from his list, as well as further thoughts of my own, these in particular struck a chord with me when I started:

(1) Get Better at Writing: to improve at writing you need to practise, and what better way than to do something practical and useful. Blogging has helped me improve the quality of my writing, which has fed directly back into my academic work.

(2) Organising Thoughts: any kind of writing forces you to organise your thoughts into meaningful content.

(3) Improve Understanding: writing strongly reinforces my understanding by forcing me to (re)think about topics.

(4) Sharing: opportunity to share useful (and not so useful!) information with readers.

(5) Debate: whilst I don't get too many comments, it allows at least a 1-way, and sometimes 2-way, conversation to develop.

(6) Notes to Self: my blog is an invaluable self-published repository of information. There are some things which are only small snippets, but just so darn useful. Where do I store such information? Well, I could put it into my GTD archive, or blog it, share it and make it easy to find in the future!

(7) Indexed: what I write about is crawled and indexed by search engines and much of it can easily be found. For example, I find it amazing that my blog entry on the NERC FSF is on the first page of Google hits for it!

(8) READING blogs saves time: OK, this isn't about writing, but reading; however I follow 57 blogs at the moment using The Old Reader (and GReader on Android) as my feed aggregator of choice. It's an invaluable time saver for keeping up with important snippets of information.

Whilst I am a BIG fan of blogging, micro-blogging (aka Twitter) is just not for me for two reasons:

(1) Vast amounts of drivel: I am NOT interested in the minutiae of a person's day. I want "signal" above "noise" in life and you drown in noise on Twitter. There is undoubtedly signal but finding it can be difficult.

(2) Time: it is considerably more time-consuming to stay on top of the constant drip of information…. don't get me started about Facebook!

As a footnote to that… Twitter has its place and I do use it occasionally. Very occasionally….

Finally a technical note - I have always used the uber-cool Blosxom blogging engine which runs as a CGI script on my own server with all posts stored as text files. It's ultra-reliable and portable, which is often not the case for more complex database-driven sites.

Credit card for travel?

Friday, 27 March, 2015

I'm off to the 6th Argentine Congress on Quaternary Geomorphology (or the rather handy Google Translate version!) shortly so have been prepping various academic and travel things ready for the trip. One thing I stumbled across which might be useful to other (UK) travellers is paying abroad - credit cards are obviously dead handy in this regard but usually charge a foreign transaction fee. Not so the Halifax Clarity Credit Card, which is free of fees on foreign transactions and, indeed, free on cash withdrawals. If you pay off your card monthly then this is a great deal.

QGIS Primer

Thursday, 26 March, 2015

Although a little dated (it was produced using v2.0 and we are currently on v2.8.1), Lex Berman's QGIS Workshop is a very accessible intro and primer to QGIS, with a smattering of useful links. Worth looking through for hints and tips.

International Data Rescue Award in the Geosciences

Wednesday, 25 March, 2015

I wanted to highlight the 2015 International Data Rescue Award in the Geosciences which is run by IEDA and Elsevier. As they say on the site, IDRA was created to raise awareness of the importance of securing access to science's older research data, particularly those with a poor preservation outlook or fragile storage conditions, and to urge efforts towards creating robust electronic datasets that can be shared globally.

This is something I have long had an interest in, going back to terrain modelling I undertook for my BSc and MSc degrees. In particular it was a focus of my PhD where I looked at a range of published and unpublished materials on the former Irish ice sheet. Some time after my PhD (!) I realised there was a dataset of striae observations of considerable size and this led to the compilation, mapping and publication, along with subsequent interpretation, of the data. This then formed one of the examples used in my recent paper on data rescue in geomorphology.

It's worth looking at the introductory section of the GeoResJ paper (see below) as it covers some more general ground about what we consider to be data rescue (and something I also blogged on)… I'm not going to repeat it here, but it's salient to note that it's anything we lose "access" to. For example, I blogged about trying to make PDFs of my MSc thesis available and how, in the space of 20 years, this particular file format is near obsolete (but not quite unreadable). Flipping this on its head, what formats should we be storing data in? Within the context of spatial data, I blogged about this a little while ago and much of it remains pertinent today. Indeed, the topic of preservation is so important that research council projects need to have a data deposition plan - however this is often file format agnostic, and a well conceived plan should really take this into consideration as well. At Wageningen University, all research students need to come up with a data management plan as part of their research - an important element.

The takeaway… if nothing else, consider how you might use the data collected as part of your research in the future, both in terms of the physical media it is stored on and the format it is stored in.

<iframe id="viewer" src="/Viewer.js/#../blosxom/documents/2015_Smith_Legacy_data_GeoResJ.pdf" width="556" height="800" allowfullscreen webkitallowfullscreen></iframe>

OS Open Data

Tuesday, 24 March, 2015

OS recently provided an update on their OpenData products as a reminder of what is available and some of the new products. Indeed, take a look at their main OpenData page, the products page and the download page. There are some really good products here including Meridian (medium scale vector), Terrain 50 (medium scale DEM), CodePoint (postcodes), BoundaryLine (administrative boundaries) and a range of raster products. Very good for a range of mapping projects and all using the very flexible Open Government Licence. Enjoy!

Note to self… Burning a DVD

Monday, 23 March, 2015

Part of a note to self…. I wanted to burn a DVD of an mp4 I had downloaded and started looking around for an easy and quick way to do this. And you'd have thought it would be simples…. but no! Which surprises me because all you need to do is transcode the video into MPEG-2, create the DVD file directory structure and then burn to the disc. All things for which there is open source software (a sketch of that route is at the end of this post). So after some false starts with InfraRecorder, cdrtfe and ImgBurn and, after a little bit of DuckDuckGoing, I ended up coming back to Windows DVD Maker which… errrrr… didn't quite work!! A couple of gotchas….

1. It doesn’t work with mp4…. so I quickly loaded TEncoder and converted it using the DVD_Player_avi settings, but changed the audio codec to Wmav2

2. When I burnt the disc - there was no audio!! A quick DuckDuckGo later and this page was useful. In short, try using WMA audio instead of AC3 (hence the point above) and then TURN OFF any filters. To do this click on “Options” (bottom right of DVD Maker screen) and go to the Compatibility tab and untick the “AVI Decompressor”.

Once I had done this things worked perfectly. As with much in the Microsoft (and Apple!) world, if you do it their way it works well.
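For completeness, here's a rough sketch of the open-source route I originally had in mind - transcode to MPEG-2, author the VIDEO_TS structure, build an ISO - wrapped in a bit of Python. I haven't tested this end-to-end (I went the DVD Maker route in the end) and it assumes ffmpeg, dvdauthor and mkisofs are installed and on the PATH; the file names are made up:

    # Sketch only: transcode -> author -> ISO. Burn the resulting ISO with ImgBurn or similar.
    import subprocess

    # 1. Transcode the mp4 to a DVD-compliant MPEG-2 stream (PAL; use ntsc-dvd for NTSC)
    subprocess.check_call(["ffmpeg", "-i", "film.mp4", "-target", "pal-dvd", "film.mpg"])

    # 2. Create the DVD directory structure (VIDEO_TS) and then the table of contents
    subprocess.check_call(["dvdauthor", "-o", "dvd", "-t", "film.mpg"])
    subprocess.check_call(["dvdauthor", "-o", "dvd", "-T"])

    # 3. Build a DVD-Video ISO ready for burning
    subprocess.check_call(["mkisofs", "-dvd-video", "-o", "film.iso", "dvd"])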

World’s most detailed wide field space photo…..

Saturday, 21 March, 2015

…. has now been created! This was a great project over at the BBC's Stargazing Live 2015, getting the public to submit photos of Orion to create it. The clever bit is combining the photos together which, from the description, looks to use image matching algorithms that my PhD student James O'Connor is utilising in his research. It first matches the image to a known constellation to calculate the area of the sky it covers - if appropriate it's accepted for processing, along with every other image of the same region. Once submissions closed, these were then all matched against one another, overlaid and combined together. This is, again, an image matching process, although I'd be interested to know what they did for the combination.
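As a toy illustration of just that final step - and only that step, the registration is the genuinely clever bit - once the frames have been matched and warped onto a common footprint, combining them can be as simple as a median stack, which also suppresses noise and transient artefacts. A minimal numpy sketch (my guess at the general approach, not their actual pipeline):

    import numpy as np

    def combine(frames):
        """Median-combine a list of registered, equally sized greyscale frames."""
        cube = np.stack(frames, axis=0)   # shape: (n_frames, height, width)
        return np.median(cube, axis=0)    # robust to outliers (satellites, hot pixels)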

A great example of remote sensing, citizen science and the way image processing cross-fertilises across disciplines.

QGIS: great features 1

Saturday, 21 March, 2015

Thought I’d kick off an occasional series of blog posts highlighting nice features in QGIS….. this is my go-to app for working with spatial data as it’s fast and reliable.

I’m currently completing work with my colleague Niels Anders on manipulating some digitised vector data. This works in Python and produces shapefile outputs - so QGIS is being used to view the data, manipulate the attribute table and symbolise some of the outputs for map production.

One of the processes I have to do is extract a sub-set of vector data in a shapefile and save it to a new one… so great feature number 1 is the “Paste features as” menu item. As the screenshot below shows I can automatically paste features to a new vector layer with the same projection as the QGIS project. Very handy and…. just makes life easier!
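For the record, roughly the same thing can be scripted from the QGIS Python console - handy when the subsetting needs repeating. A minimal sketch against the QGIS 2.x API (the output path is just an example):

    # Export only the selected features of the active layer to a new shapefile
    from qgis.core import QgsVectorFileWriter

    layer = iface.activeLayer()  # iface is available in the QGIS Python console
    error = QgsVectorFileWriter.writeAsVectorFormat(
        layer,
        "/tmp/subset.shp",       # example output path
        "UTF-8",
        layer.crs(),             # keep the layer CRS (the menu item uses the project CRS)
        "ESRI Shapefile",
        True                     # onlySelected: write just the selected features
    )
    if error == QgsVectorFileWriter.NoError:
        iface.addVectorLayer("/tmp/subset.shp", "subset", "ogr")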

Eclipse Maps

Friday, 20 March, 2015

Feels appropriate, given the eclipse across northern Europe (very nearly now!) to post about the topic….. and point people to the very good Eclipse Maps (Esri employee by day, eclipse fanatic by night). Below is an example from a couple of years ago - really nicely produced and very clear. Worldwide eclipses 2001-2020 are shown here…. the gallery has a good range of maps both historical and predicted.

Monochrome Cameras

Thursday, 19 March, 2015

It's a strange situation now - in the past, monochrome (or B&W or panchromatic) photos were the standard images to be produced. You had a choice of…. B&W film and that was it!! B&W remained the mainstay of (particularly professional) photography right the way through to the 1970s. Whilst the idea for colour projection (and photography) dates back to James Clerk Maxwell in 1855, it wasn't until the launch of Kodachrome in 1935 that there was a viable commercial product available… at a price. The 1970s was when colour finally dropped to "consumer" prices.

Since the 1980s we have had the rise of digital, which works completely differently. Whilst film has 3 layers, each sensitive to different wavelengths of light, a digital sensor is inherently monochromatic….. it only records the intensity (a greyscale value) of light incident upon the sensor. On top of the sensor sits a colour filter array (CFA) which filters either red, green or blue light at each pixel. The arrangement of filters in the array is critical and typically a Bayer arrangement is used. This has 50% green, 25% red and 25% blue filters, meaning that the sensor records a matrix (or patchwork) of values for different wavelengths of light - this single layer is then demosaiced into three new layers representing red, green and blue light.
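To make the patchwork idea concrete, here is a toy numpy sketch of an RGGB Bayer mosaic - one value per pixel, half of them green - which is the single layer that the demosaicing step then has to turn back into three:

    import numpy as np

    def bayer_mosaic(rgb):
        """Simulate an RGGB colour filter array: keep one channel per pixel."""
        h, w, _ = rgb.shape
        mosaic = np.zeros((h, w), dtype=rgb.dtype)
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red   (25% of pixels)
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green (50% of pixels, half here...)
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green (...and half here)
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue  (25% of pixels)
        return mosaic   # the two missing channels per pixel must be interpolated (demosaiced)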

The obvious point is that, if you only want a monochrome image, what can you do? The digital sensor is inherently recording in a single layer. The sub-sampling employed by using the CFA requires interpolation to a colour image which, if you produce a B&W, means you then convert back to a single layer! Crazy!! One solution is to buy a monochrome camera - yes, at least one manufacturer now makes a B&W camera and that's the Leica M Monochrom. Nice camera, but a little pricey at £4,500. One alternative is to remove the Bayer array (debayer) from an existing camera - a few people appear to have done this but there are (as far as I'm aware) no commercial services as it's a risky business. The array is bonded to the sensor and you need to scrape it off, but clearly people have successfully completed the task.

So, besides the appeal of shooting in monochrome, what are the advantages? Well, with no demosaicing process to go through the images should be sharper, which is critical for photogrammetry where colour is less important. I think we'll see more of this over the next few years.

Treasure the voice

Thursday, 19 March, 2015

A really powerful talk from Julian Treasure at TED. He starts with the seven deadly sins of speaking - why aren't people listening to you - before moving on to the four powerful cornerstones of speaking. These four (HAIL: honesty, authenticity, integrity, love) any speaker (every academic speaker) should take to heart. He then moves on to look at HOW to use the voice. These are really, really useful: register (lower, upper and middle), timbre (smooth, rich, warm…. how it feels), prosody (variations in tone, rather than monotone), pace (speeding up or slowing down), pitch, volume and……… silence (sodcasting was a great catch!). Finally he covers how to warm up prior to giving a speech - important advice if you want to make an impact and make the most of your communication.

Voice is often ignored - don't ignore it, it's a critical part of communication.

Remote sensing in the budget

Wednesday, 18 March, 2015

Yup, it's true. Perhaps the biggest surprise in the budget was the announcement that Pitcairn would be the focus of the world's largest marine reserve at 834,000 sq km, more than twice as big as the UK, with a view to preventing illegal fishing. Enforcement takes operational form as "Project Eyes", a ship tracking command centre in Harwell partnered by the Pew Charitable Trusts and the UK Satellite Applications Catapult. Some nice detail on the virtual watchroom at Pew and the BBC. This uses what is now fairly standard synthetic aperture radar to track shipping, but can ingest multiple data sources and then analyse the data for suspicious activity. High resolution optical satellites can then be targeted if need be. Great stuff!!

Andrew Stanton: Clues to a Great Story

Wednesday, 18 March, 2015

I came across Andrew Stanton's (of Pixar fame) TED talk ages ago but have only just got around to watching it. Any great presenter needs to take on board the elements of storytelling, and here Andrew provides a masterclass in presentation structure, design, delivery and making us care….. I can't overstate how important these elements are. A MUST WATCH! (and the hook is brilliant!)

OPEN ACCESS EPRINT: Use of legacy data in geomorphological research

Wednesday, 18 March, 2015

Smith, M.J., Keesstra, S. and Rose, J. (2015)
GeoResJ


This paper considers legacy data and data rescue within the context of geomorphology. Data rescue may be necessary dependent upon the storage medium (is it physically accessible) and the data format (e.g. digital file type); where either of these is not functional, intervention will be required in order to retrieve the stored data. Within geomorphological research, there are three scenarios that may utilize legacy data: to reinvestigate phenomena, to access information about a landform/process that no longer exists, and to investigate temporal change. Here, we present three case studies with discussion that illustrate these scenarios: striae records of Ireland were used to produce a palaeoglacial reconstruction, geomorphological mapping was used to compile a map of glacial landforms, and aerial photographs were used to analyze temporal change in river channel form and catchment land cover.

Health and Safety Horrors

Tuesday, 17 March, 2015

I was recently doing a health and safety talk to our second year students prior to a geomorphology field trip - nothing overly risky as it was essentially walking footpaths on parts of the UK coastline, but all these trips need to undergo a risk assessment and trip briefing. It's important to cover what can be quite dry material in terms of trip regulations, so I was looking for some inspiration to drive home the point that seemingly mundane situations can lead to… dangerous activities!! Compliance training specialists Highfield have put together a great "House of Horrors" - my particular favourite was the window cleaner!

Getting things done…. update

Tuesday, 17 March, 2015

Way back in 2008 I blogged about Getting Things Done…. David Allen's productivity system for busy people. Well, since I wrote that, Palm have virtually disappeared off the face of the planet having sunk all their money into developing the rather excellent Palm Pre running webOS, which was subsequently binned, open sourced and licensed to LG!

Anyway, that orphaned my go-to app (NoteStudio) on my dying Palm TX, which was subsequently replaced by my Orange San Fran. My eventual solution to this was to run StyleTap on my Moto X, but the interface now feels clunky and old - in fact, because I had continued to use NoteStudio on my PC, I needed to sync the data to the StyleTap virtual machine. The HotSync connector for Palm Desktop doesn't allow a "virtual" sync from which to manually transfer the files to StyleTap…. so my brother gave me his old (dust gathering) Palm TX, I synced my data to his device, copied the files off onto the SD card and thence into StyleTap. This worked great, although I couldn't then use the desktop NoteStudio.

To cut to the chase, I therefore wanted an Android app that supported the GTD methodology, supported data syncing and ideally had a web interface. Enter the slightly daftly named ToodleDo which appears to do all that and more. There is a native Android app which is intuitive to use, and the main task-based panel allows you (all importantly) to generate sub-tasks and so track progress towards outcomes. You can tag tasks within folders to see their context. In addition it supports the taking of notes and hierarchical outlines, which are both useful additions.

This supports my task outlining and tracking; however, for rapidly writing "To Do" lists ready for creating tasks, nothing beats Google Keep (I know, I try to be Google agnostic!) which very graphically and easily allows you to create lists, add images, dictate etc, all synced to a web interface as well. A great way to keep track of ideas as you generate them.

How to win friends and use GIS

Tuesday, 17 March, 2015

For those new to QGIS, a good primer….

Dice Player

Sunday, 15 March, 2015

I often download and later watch talks from TED, as well as a range of podcasts. Good presenters often talk quite slowly in order to improve clarity, particularly for live performances. However, when playing these back they don't need to be this slow as you can listen at a much faster speed…… and if you don't catch it first time you can always replay it. Take a look at Donald Clark's blog for some pointers on the power of audio….

As a result I looked for an Android media player that can increase playback speed - and Dice Player is the one I have currently settled on. One problem of speeding up audio is that the pitch changes (see Wikipedia) which means a player needs to correct for this. Dice Player does a great job and I find I can speed up TED talks by 1.3x, a great time saver!
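For the curious, the underlying trick is time stretching rather than simple resampling. A minimal offline sketch in Python (assuming the librosa and soundfile packages; the file names are just examples):

    import librosa
    import soundfile as sf

    y, sr = librosa.load("ted_talk.mp3", sr=None)        # keep the original sample rate
    y_fast = librosa.effects.time_stretch(y, rate=1.3)   # 1.3x faster, pitch preserved
    sf.write("ted_talk_1.3x.wav", y_fast, sr)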

Android App: Maps.Me

Friday, 13 March, 2015

I've been a long-time user of MapDroyd, part of my mapping requirement to have offline maps for travel. However, since being subsumed by CloudMade (who seem to be focusing upon in-car navigation), all development has ceased. This is a shame as, whilst the interface was a bit clunky, the file format for the maps was very space efficient. This came to a head when upgrading my Nexus 7 to Android Lollipop, which broke it - it appears that as it tries to start rendering map data on top of the world base map, part of the graphical engine fails and it crashes.

So the hunt was on for a replacement….. I have colleagues who use OsmAnd, however I've never really got on with it - I find the application slow and the interface horrible. I've subsequently come across maps.me which I can only say is delightful to use. The interface is simple and clear, the search good and the maps clear and easy to read, whilst rendering fairly quickly. The maps also seem to be updated with OSM data relatively regularly. And of course, the maps are offline. It also has an intuitive bookmarking system for storing places and can load in KML files as well. There is a routing version, but this isn't something I have used - it requires different map files which additionally store the routing information.

Two minor complaints:

1. It sends usage statistics back to the publisher - this should be opt-in and is a little daft if you are travelling. I installed the Android firewall DroidWall and simply block all outgoing comms.

2. The map files are stored on internal memory. Not very good if you are memory challenged (the UK is 500MB)!! On my rooted device I use Folder Mount to point all directory requests to a folder on my microSD card.

Burkard 1992: Determination of the mean scarp thickness d0 to compute flow avalanches (TRANSLATION)

Tuesday, 10 March, 2015

[NOTE: way back in the mists of my MSc, whilst researching the runout of high frequency avalanches in British Columbia, Canada, I asked a fellow student (Matthias Jacob) to translate a paper written in German from the Swiss Federal Institute for Snow and Avalanche Research. He very kindly did so and I was recently reminded that this was sitting in one of my files. So I thought the world might (slightly!) benefit from this translation. PDF of the original below as well]



Determination of the mean scarp thickness d0 to compute flow avalanches



1. Introduction

The degree of avalanche hazard is determined by the pressure impact (300 kN/m2) and return interval (RI; up to 300 years) in populated areas. In some avalanche chutes, avalanches with large volumes are rarer than those with small volumes. Using a known RI the magnitude can then be estimated. Avalanche-technical calculations lead from the magnitude to the avalanche pressure. The pressure at a certain point is therefore correlated with the RI. Considering the rare and thus important precipitation events - which were usually not observed - pressures and frequencies (F) have to be calculated. At the institute the deterministic-statistical Voellmy-Salm model is applied for these calculations. In this model runout distances and pressure results are approximately proportional to the mean d0. On the other hand, d0 is correlated to the RI, which makes the runout distance a statistical variable with a certain probability.
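[Context, not part of the original text: the Voellmy-Salm flow avalanche model is usually written with a dry friction coefficient $\mu$ and a turbulent friction coefficient $\xi$ (in m s$^{-2}$), giving a terminal flow velocity of roughly

$$v^2 = \xi \, h \, (\sin\psi - \mu\cos\psi)$$

where $h$ is the flow depth (closely tied to d0) and $\psi$ the slope angle - which is why the runout and pressure results scale with d0, and why the adjustments to $\xi$ and $\mu$ mentioned in Section 5 matter.]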

De Quervain (1974) [1] wrote, relating to the problem of the determination of XXX possibilities: "the entire study area was built on XXX correlations and the question is posed how one can get to a concrete understanding. In the long term the statistics of snowfall data and d0 will contribute data to the desired functions. In addition, snow mechanical experiments can yield more information. Presently one has to rely on experience data of d0 in areas of catastrophes or one has to back-calculate them from extreme runout distances. It is important to XXX between snow rich (abundant) and snow poor areas in terms of d0 of avalanches within the same RI class. Or one considers such a variation within one climatic region with avalanches of varying RI as being treated."

Today it is possible to differentiate quantitatively between snow rich and snow poor climate regions. The base assumptions which contribute to the determination of d0 are shown in this report.

By appropriately choosing d0 in different climatic regions in the Swiss Alps, the same hazard scales (classes) can be obtained in all hazard maps and constructive avalanche mitigation measures. This is manifested in the manual for calculating flow avalanches for practitioners.

2. Assumptions and assessment of d0

The main influencing factors on d0 are, according to [3]:

a. New Snow: the climatically possible precipitation in 3 subsequent days is considered (Fig. 1). Consideration of longer precipitation periods has been found to be not significant. It is also assumed that large avalanches - in comparison to so-called skier avalanches - are always triggered by precipitation events and that basically it is only new snow that glides off. Investigations have confirmed this, although scarring in old snow is possible. In addition, RIs of new snow increases are set equal to those of the particular avalanches (conservative assumption).

b. Slope: the shear strength limits the amount of new snow deposited on a slope. Before the climatically possible precipitation is reached the snow glides off. The strength is assumed according to Coulomb-Mohr (Section 3.3). The strength should be greater for larger precipitation events. Otherwise - considering a constant strength - d0 would be constant at constant slope, independent of RI and climate.

c. Drift snow: On less steep slopes the new snow measured XXX d0 are locally increased due to wind transport (consider the predominant wind direction during precipitation events). The mean d0 (taken over the whole scarp area; this does usually not correspond to a mean d0 along an observed scarp line) is determined perpendicular to the slope by:

(Eq 1)

In summary, d0 is a function of climate, snow mechanics and topography.

3. Base Data

The most important basis is snow data from comparative stations and measurement stations which has been obtained and analysed in the Swiss Alps. The snow data are collected over periods of between 20-60 years. The base data are obtained from horizontal measurement fields (altitude 1100-2000 m asl). How does one obtain d0 for varying RI and climatic region by using these basic data?

3.1 3-day delta HS increase values (climatically possible snow height increase)

The data of the various comparative stations and measurement stations were analysed by the Institute of Geography, University of Bern [4] under the guidance of the Institute (SLF) until 1982. From this year on they are reported and updated by the SLF - Section I (Section Föhn).

The delta HS basic data equal a 3-day snow thickness increase on horizontal fields. Extrapolation of the 30-60 yr increase data is done using Gumbel extreme value statistics.

In addition a delta HS altitude gradient must be recognised: the mean altitude of the measurement fields and comparative stations is between 1100 and 2800 m asl (mean 1600 m). On the other hand, the avalanche scarps are located higher, so that a delta HS altitude gradient must be considered. According to [5] page 155 this gradient is between 3-7 XX/100 m altitude, depending on the region. We suggest a value of 5 XX/100 m. In addition we suggest a reference height of 2000 m asl because the impact of wind deposition in higher areas, and therefore drift snow, must be considered (d0* is the mean value over the whole scarp area).

3.11 Determination of factor F (30/300)

The 3-day delta HS increase value with an RI of T=30 years is a known measurement value. Extrapolation using extreme value statistics to an RI of 300 years shows increasing scatter. It must be asked whether 30-60 year time series can indeed be extrapolated to 300 years and how reliable these values are. In general it can be assumed that one cannot exceed the period of measurement by a factor of 3. Despite the short period of measurement we are forced to do the extrapolation. XXX (>66 stations) of the 3-day delta HS increase values for T=30 years and T=300 years yields a ratio F of the 300 : 30 year value between 1.30 and 1.45 (F_mean ~= 1.37). In addition no regionally significant differences are discernible. For further observations we consider F=1.4. In the following tables A and B the largest and smallest increase values are summarised, as well as the consideration per station at a given altitude for T=30. These values are subsequently extrapolated with the factor 1.4 to 300 year values.
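[Aside, not part of the original text: the extrapolation described here can be sketched in a few lines - fit a Gumbel distribution to annual maxima of the 3-day delta HS and compare the 30- and 300-year return levels. The numbers below are synthetic, purely to illustrate the method:]

    import numpy as np
    from scipy import stats

    # Synthetic annual maxima of the 3-day delta HS (cm) - illustrative only
    annual_max = np.array([45, 62, 38, 71, 55, 49, 80, 66, 58, 52,
                           60, 47, 73, 68, 41, 57, 64, 50, 77, 59], dtype=float)

    loc, scale = stats.gumbel_r.fit(annual_max)

    def return_level(T):
        """Value exceeded on average once every T years."""
        return stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)

    x30, x300 = return_level(30), return_level(300)
    print(x30, x300, x300 / x30)   # the last ratio plays the role of the factor F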

3.2 Determination of the basic value d0* for slope = 28 degrees

Delta HS increases are measured perpendicular to the horizontal field and extrapolated. For avalanche-technical calculations, values must be used perpendicular to the slope (Fig. 1). Taking the potential scarp area as lying between 28-50 degrees, we determine the base value d0* for mu=28 degrees by assuming that as much snow can accumulate on a flat surface as on a 28 degree slope. With an increase in slope, a slope dependency f(mu) must be considered (Section 2).

3.3 Determination of slope dependency

The slope dependency is included by considering the slope factor f(mu). A general criterion for the determination of stability is (yet) unknown. In conjunction with soil mechanics the simple Coulomb-Mohr criterion is used. A scarp develops when the stability S <= 1, with S being the ratio between the strength of a weak intermediate layer and the shear stress.
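[Aside, not part of the original text: the equations themselves were not reproduced in the translation. In standard Mohr-Coulomb notation the stability index described here takes a form like

$$S = \frac{\tau_{\mathrm{strength}}}{\tau} = \frac{c + \sigma_n \tan\varphi}{\tau},$$

with cohesion $c$, normal stress $\sigma_n$, friction angle $\varphi$ and shear stress $\tau$; a scarp forms where $S \le 1$. For reference, the slope factor that appears in the Swiss guidelines is commonly quoted as $f(\psi) = 0.291/(\sin\psi - 0.202\cos\psi)$, which equals 1 at $\psi = 28^\circ$ and uses the tan mu = 0.202 value mentioned below.]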

According to experience, the values c and f for d0 are assessed:

EQ

For cohesion it is postulated that

EQ

For the angle of internal friction we write

EQ

It is therefore considered independently of d0* since it is probably dependent on the grain form in the glide layer (roughness of the scarp area).

In comparison to measured values, the data by Roch (1966) [8] are available. He conducted shear box measurements with varying weight. In the important shear range from 1-2 kN/m2 the following values were determined:

The assumed value for tg mu = 0.202 is known. The values by Roch seem rather large. Since slab ("snow board") avalanches can break off at 25 degrees, mu must be <25 degrees in some cases. A smooth and fine grained scarp area has been assumed. Roch's measurements indicate an increasing tendency of mu with increasing grain size.

4.3 Calculation of flow avalanches - a manual for practitioners with examples

The valid values are shown in detail in SLF paper 47 [2] with the title above. They are the topic of this report.

5. Remarks

The table below shows the differences between the valid base values d0* and d0, respectively, and the ones by Salm (1989) and de Quervain (1979/80).

The partly large differences in the determination of d0 could lead to the conclusion that all avalanche hazard maps must be revised. This is in general not true. One has to consider that avalanche-technical calculations are always being improved. Besides the more reliable determination of the mean d0, the roughness parameters epsilon (ms-2) and mu [1] were adjusted after field experiments. The factor of turbulent friction epsilon (ms-2) was increased from about 500 to 1000. This results in an increased avalanche velocity which compensates for the decrease of the 300 year d0 values. In summary, we state that today the determination of the 30 year avalanche boundary is stricter. In addition there are no distinct differences in the determination of the 300 year avalanche boundary.