
Create a raster with a fixed 10-meter pixel scale in PostGIS


I have a series of points that I would like to convert to a raster, and then calculate the area of the raster as a proxy for "amount of area covered". I'm trying to do this using PostGIS rather than another language so that I can share my query easily with collaborators.

One annoyance is that my points are stored as json (not geojson).

id | points
---+--------
 0 | [{'lat': 33.18424, 'lon': -100.0428, 'property': 'Group A'}, {'lat': 33.18424, 'lon': -100.0428, 'property': 'Group A'},…
 1 | [{'lat': 33.18424, 'lon': -100.0423, 'property': 'Group A'}, {'lat': 33.18424, 'lon': -100.0458, 'property': 'Group A'},…
 2 | [{'lat': 33.18436, 'lon': -100.0422, 'property': 'Group B'}, {'lat': 33.18420, 'lon': -100.0428, 'property': 'Group B'},…

So I have to do a little bit of work to get my points in a workable format.

SELECT ST_SetSRID(
         ST_Point((latlon->'lon')::text::numeric,
                  (latlon->'lat')::text::numeric),
         4326)::geometry AS geom
FROM (SELECT JSON_ARRAY_ELEMENTS(points::json) AS latlon FROM my_table) AS my_table
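For prototyping outside the database, the same flattening can be sketched in Python. This is a minimal stdlib sketch, assuming the points value is valid JSON with double-quoted keys (the sample rows above show single quotes, which json.loads would reject as-is):

```python
import json

# One row's "points" value, assumed to be valid JSON (double-quoted keys).
row = ('[{"lat": 33.18424, "lon": -100.0428, "property": "Group A"}, '
       '{"lat": 33.18436, "lon": -100.0422, "property": "Group B"}]')

def flatten_points(points_json):
    """Mirror of JSON_ARRAY_ELEMENTS + ST_Point: one (lon, lat) pair per element."""
    return [(p["lon"], p["lat"]) for p in json.loads(points_json)]

print(flatten_points(row))
```

The SQL above remains the authoritative version; this is only a way to sanity-check the extraction logic on a sample row.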

This works fine, but now I am having some trouble getting ST_AsRaster to work for me. I want each pixel to represent a 10m x 10m area; however, there are so many variants of ST_AsRaster that I am not sure how to specify this correctly. Here is my full query:

WITH my_geom AS (
  SELECT ST_SetSRID(
           ST_Point((latlon->'lon')::text::numeric,
                    (latlon->'lat')::text::numeric),
           4326)::geometry AS geom
  FROM (SELECT JSON_ARRAY_ELEMENTS(points::json) AS latlon FROM my_table) AS my_table
)
SELECT ST_AsRaster(geom, scalex=10.0, scaley=10.0) AS my_raster
FROM my_geom

I think I may need to transform the point geometry SRID or otherwise somehow specify units (meters) for ST_AsRaster, but I am not sure this is possible.
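One way to see the unit problem: in EPSG:4326 the scale arguments are in degrees, not meters, so scalex=10.0 would request 10-degree pixels. A rough stdlib sketch of what 10 m corresponds to in degrees at the question's latitude (the 111,320 m-per-degree constant is the usual spherical approximation, not something from the question; transforming the geometry to a projected SRS such as a UTM zone avoids this problem entirely):

```python
import math

METERS_PER_DEGREE_LAT = 111_320.0  # spherical approximation of one degree of latitude

def meters_to_degrees(meters, lat_deg):
    """Approximate degree equivalents of a metre distance at a given latitude."""
    dlat = meters / METERS_PER_DEGREE_LAT
    # Longitude degrees shrink with the cosine of latitude.
    dlon = meters / (METERS_PER_DEGREE_LAT * math.cos(math.radians(lat_deg)))
    return dlat, dlon

dlat, dlon = meters_to_degrees(10.0, 33.18424)
print(dlat, dlon)
```

Both values are on the order of 1e-4 degrees, which is why a metre-based scale only makes sense after projecting the points into a metre-based coordinate system.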


I've found the best way to create a raster feature is to use a template raster. The template raster should have the same extent as the union of the features you want to rasterise, with each pixel at the desired dimensions.

I'll copy some code from a project I recently worked on, where I converted very large, irregular vector features into raster objects.

First, I created a mask_raster (a raster template with pixels of 25x25 metres and an extent equal to the extent of the union of all of my inputs). This makes for large rasters in many cases, but it makes things easier if you then want to do calculations on overlapping rasters. The syntax is:

ST_MakeEmptyRaster(integer width, integer height, float8 upperleftx, float8 upperlefty, float8 scalex, float8 scaley, float8 skewx, float8 skewy, integer srid=unknown);

And my code was:

CREATE TABLE mask_raster AS (
  SELECT ST_MakeEmptyRaster(5861, 4141, 1735076, 5390595, 25, 25, 0, 0, 2193) AS rast
);
UPDATE mask_raster SET rast = ST_AddBand(rast, '1BB'::text, 0);
-- Set constraints
SELECT AddRasterConstraints('mask_raster'::name, 'rast'::name);
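The width and height passed to ST_MakeEmptyRaster come from the extent and the pixel size. A quick sketch of that arithmetic; the xmin/ymin values here are back-calculated so the result matches the 5861 x 4141 call above, not taken from the original project:

```python
import math

def raster_dims(xmin, ymin, xmax, ymax, pixel_size):
    """Columns and rows needed for an empty raster to cover an extent."""
    width = math.ceil((xmax - xmin) / pixel_size)
    height = math.ceil((ymax - ymin) / pixel_size)
    return width, height

# Hypothetical NZTM (EPSG:2193) extent with 25 m pixels; chosen to reproduce
# the 5861 x 4141 dimensions used in the answer's ST_MakeEmptyRaster call.
print(raster_dims(1735076, 5287070, 1881601, 5390595, 25))
```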

You'll need to read the documentation for these functions to determine appropriate parameters, such as the pixel type. I used 1BB because I was making simple Boolean surfaces of presence and absence, rather than holding quantitative values.

Later, I used this template as an input to the ST_AsRaster function. The template raster thereby gives its extent and pixel dimensions to your conversion of vector features. My command was:

WITH ra AS (
  WITH mb AS (
    SELECT isogeom,
           (SELECT rast FROM mask_raster) AS rast
           -- Holds a null raster built from the envelope of all MBs;
           -- this controls the resolution, origin and size of the output raster
    FROM mb_isochrones
    WHERE source = 'RTI'
      AND mb2013 = '%s'
      AND time = '07:15:00'::time
      AND date >= '%s'::date -- start date for loop
      AND date < '%s'::date  -- end date for loop
    ORDER BY date
  )
  SELECT rast,
         ST_AsRaster(isogeom, rast, ARRAY['1BB'], ARRAY[1], ARRAY[0], true) AS rast2
  FROM mb
)
SELECT ST_MetaData(ra.rast2),
       ST_MetaData(ra.rast),
       ST_SameAlignment(ra.rast, ra.rast) AS sm,
       (ST_DumpValues(ra.rast2, ARRAY[1], false)).*
FROM ra;

I presented the whole query to give some context. (Using psycopg2, I was also dumping the raster features to arrays to work with them in NumPy… PostGIS raster is surprisingly difficult.) The important parts of the above query for your purposes are:

(SELECT rast FROM mask_raster) AS rast

and

ST_AsRaster(isogeom, rast, ARRAY['1BB'], ARRAY[1], ARRAY[0], true) AS rast2

In the line just above, the ST_AsRaster function uses rast, the mask_raster, as a template. That's the easiest way to handle this, in my experience. Failing to use a mask raster layer can lead to misalignment problems if you're making many raster features.


Curve Fit: a pixel-level raster regression tool for mapping spatial patterns

Geographic patterns can change through time and/or across space, and these changes can lead to differences in the movement patterns and body condition of organisms and in their interactions with each other and their environment, and ultimately to population- and community-level changes (Turner 1989). When quantifying landscape patterns using remotely sensed data, it is important to recognize that each pixel (i.e. picture element) has a temporal and spatial context. A pixel's temporal context refers to its past and present classification and is often used to model landscape change at the finest possible resolution (e.g. Baker 1989). The spatial context of a pixel depends on the classification of neighbouring pixels and on the size of the area considered as the neighbourhood, and it forms the basis of morphological spatial pattern analysis (MSPA; Vogt et al. 2007, Riitters et al. 2007) - the reclassification of pixels based on the role they play in their neighbourhood. Despite the fact that pixels are the basic unit of a map and that they have a spatial and temporal context, we are unaware of any software package or tool that can be used to quantify changes that take place at the pixel level with changes in time or spatial scale.
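The neighbourhood-reclassification idea behind MSPA can be illustrated with a toy sketch: a minimal core/edge rule on a binary grid using 4-connectivity (real MSPA distinguishes several more classes; everything here is illustrative):

```python
def classify(grid):
    """Toy MSPA-style rule: a foreground (1) cell whose four neighbours are all
    foreground is 'core'; other foreground cells are 'edge'.
    Out-of-bounds neighbours count as background."""
    rows, cols = len(grid), len(grid[0])

    def at(r, c):
        return grid[r][c] if 0 <= r < rows and 0 <= c < cols else 0

    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            if grid[r][c] == 0:
                row.append("background")
            elif all(at(rr, cc) for rr, cc in
                     ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))):
                row.append("core")
            else:
                row.append("edge")
        out.append(row)
    return out

demo = [[1, 1, 1],
        [1, 1, 1],
        [0, 1, 1]]
labels = classify(demo)
```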


Animate pixels (time series) of a raster

I am new to Blender and here for a very specific task. I have a fixed geographic area defined by a tif image (GeoTIFF). Each pixel of this image has a height value. I have 100 such images of the same area, but the pixel values vary.

Case 1: Following a tutorial (Raster extrusion), I am able to extrude the pixel heights for each image.

Now, I would like to add all these images as a sequence with increasing and decreasing heights, and make an animation to export as an FBX animation.

Case 2: The second option I thought of is this: I would like to animate these pixels based on the values they hold. For example, I keep just one image, and for each pixel of this image I read a time-series data file at different time frames. That is, imagine a text file holding pixel ID and pixel height: timeframe one is value v1, timeframe two is value v2, and so on up to value v100. The extrusion of each pixel then varies according to this sequence of values coming from the text file.

I feel the second case will look more elegant, because Blender can then interpolate pixel extrusion values between these timeframes. And again, I would like to export it as an FBX animation that can be imported into Unity.
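Case 2 can be prototyped outside Blender. A minimal sketch of the per-pixel interpolation that Blender's keyframes would perform, assuming each pixel's heights arrive as a plain list indexed by timeframe (file parsing omitted; all names here are made up):

```python
def interpolate_height(series, t):
    """Linear interpolation over a list of per-frame heights.
    series[i] is the height at integer timeframe i; t may be fractional."""
    if t <= 0:
        return series[0]
    if t >= len(series) - 1:
        return series[-1]
    i = int(t)
    frac = t - i
    return series[i] * (1 - frac) + series[i + 1] * frac

# One pixel's heights at timeframes 0..3 (illustrative values).
heights = [0.0, 10.0, 4.0, 4.0]
print(interpolate_height(heights, 0.5))
```

Inside Blender the equivalent step would be inserting a keyframe per timeframe on each extruded element and letting the F-curves do this interpolation automatically.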


NJDEP 10-meter Digital Elevation Grid of the Barnegat Bay Watershed Management Area (WMA 13)

I. Description of Data to be Provided
The data provided herein are distributed subject to the following conditions and restrictions.

For all data contained herein, (NJDEP) makes no representations of any kind, including, but not limited to, the warranties of merchantability or fitness for a particular use, nor are any such warranties to be implied with respect to the digital data layers furnished hereunder. NJDEP assumes no responsibility to maintain them in any manner or form.

1. Digital data received from the NJDEP are to be used solely for internal purposes in the conduct of daily affairs.

2. The data are provided, as is, without warranty of any kind and the user is responsible for understanding the accuracy limitations of all digital data layers provided herein, as documented in the accompanying Data Dictionary and Readme files. Any reproduction or manipulation of the above data must ensure that the coordinate reference system remains intact.

3. Digital data received from the NJDEP may not be reproduced or redistributed for use by anyone without first obtaining written permission from the NJDEP. This clause is not intended to restrict distribution of printed mapped information produced from the digital data.

4. Any maps, publications, reports, or other documents produced as a result of this project that utilize NJDEP digital data will credit the NJDEP Geographic Information System (GIS) as the source of the data with the following credit/disclaimer: This (map/publication/report) was developed using New Jersey Department of Environmental Protection Geographic Information System digital data, but this secondary product has not been verified by NJDEP and is not state-authorized.

    Browse Graphic File Name : http://www.state.nj.us/dep/gis/digidownload/images/wmalattice/wma13lat.gif
    Browse Graphic File Description : 10-meter Digital Elevation Grid of the Barnegat Bay Watershed Management Area (WMA 13)
    Browse Graphic File Type : GIF

Data Quality Information

    Attribute Accuracy :
      Attribute Accuracy Report :
        The accuracy of a DEM is dependent upon the level of detail of the source and the grid spacing used to sample that source. The primary limiting factor for the level of detail of the source is the scale of the source materials. The original source data, supplied at a 10-meter x 10-meter cell size by USGS, were edge-matched and hydrologically enhanced. The proper selection of grid spacing determines the level of content that may be extracted from a given source during digitization.
        The fidelity of the relationships encoded in the data structure of the DEM is automatically verified using a USGS software program upon completion of the data production cycle. The test verifies full compliance with the DEM specification.
        The DEM is visually inspected for completeness on a DEM view and edit system for the purpose of performing a final quality control and, if necessary, an edit of the DEM. The physical format of each digital elevation model is validated for content completeness and logical consistency during production quality control and prior to archiving in the National Digital Cartographic Data Base. Due to the variable orientation of the quadrilateral in relation to the Universal Transverse Mercator (UTM) projection grid, profiles that pass within the bounds of the DEM quadrilateral may be void of elevation grid points and are not represented in the DEM. This condition occurs infrequently and is always the first or last profile of the dataset. Level 2 DEM: Level 2 DEM's may contain void areas due to interruptions to contours in the source graphic or DLG. Void area elevation grid posts are assigned the value of -32,767. In addition, suspect elevation areas may exist in the DEM but are not specifically identified. Suspect areas can be located on the source graphic as a "disturbed surface," symbolized by contours overprinted with photorevised or other surface patterns.
    Horizontal Positional Accuracy :
      Horizontal Positional Accuracy Report :
        The horizontal accuracy of the DEM is expressed as an estimated root mean square error (RMSE). The estimate of the RMSE is based upon horizontal accuracy tests of the DEM source materials, which are selected as equal to or less than the intended horizontal RMSE of the DEM. The testing of horizontal accuracy of the source materials is accomplished by comparing the planimetric (X and Y) coordinates of well-defined ground points with the coordinates of the same points as determined from a source of higher accuracy.

        DEM's are edited to correctly depict elevation surfaces that correspond to water bodies of specified size.

        Level 1 DEM: An RMSE of 7 meters or less is the desired accuracy standard. An RMSE of 15 meters is the maximum permitted. A 7.5-minute DEM at this level has an absolute elevation error tolerance of 50 meters (approximately three times the 15-meter RMSE) for blunder errors for any grid node when compared to the true elevation. Any array of points in the DEM cannot encompass more than 49 contiguous elevations in error by more than 21 meters (three times the 7-meter RMSE). Systematic errors that are within stated accuracy standards are tolerated.

            Source Information :
              Source Citation :
                Citation Information :
                  Originator : USGS
                  Publication Date : Unknown
                  Title : USGS 10-meter DEMs
                  Geospatial Data Presentation Form : raster digital data
                  Publication Information :
                    Publication Place : Sioux Falls, SD
                    Publisher : USGS
                    Time Period Information :
                      Single Date/Time :
                        Calendar Date : Unknown
                        Time of Day : Unknown

        Level 2 DEM: Level 2 DEM's are produced by converting 1:24,000-scale and 1:100,000-scale hypsography digital line graph (DLG) data to DEM format, or the DEM's are generated from vector data derived from scanned raster files of USGS 1:24,000-scale or 1:100,000-scale map series contour separates.

        Level 3 DEM: Level 3 DEM's are created from DLG data that have been vertically integrated with all categories of hypsography, hydrography, ridge line, break line, drain files and all vertical and horizontal control networks. The production of Level 3 DEM's requires a system of logic incorporated into the software interpolation algorithms that clearly differentiates and correctly interpolates between the various types of terrain, data densities and data distribution.

        Water body editing: DEM surface areas corresponding to water bodies are flattened and assigned map-specified or estimated surface elevations. Water body areas are defined as ponds, lakes, and reservoirs that exceed 0.5 inches at map scale and double-line drainage that exceeds 0.25 inches at map scale. Water body shorelines are derived either from a hypsographic DLG or by interactive delineation from 1:24,000-scale or 1:100,000-scale USGS map series.

        Edge matching: DEM datasets within a project area (consisting of a number of adjacent files) are edge matched to assure terrain surface continuity between files. Edge matching is the process of correcting adjacent elevation values along common edges. The objective of edge matching is to create more accurate terrain representations by correcting the alignment of ridges and drains, and overall topographic shaping, within an approximately 25-30 row or column grid post zone on both edges.

        Quality control: DEM's are viewed on interactive editing systems to identify and correct blunder and systematic errors. DEM's are verified for physical format and logical consistency at the production centers and before archiving in the National Digital Cartographic Data Base (NDCDB), utilizing the Digital Elevation Model Verification System (DVS) software.


Creating elevation/height field gdal numpy python

I figure I'm missing something simple and look forward to your advice.

From terragendataset.cpp,v 1.2:

 * Project:  Terragen(tm) TER Driver
 * Purpose:  Reader for Terragen TER documents
 * Author:   Ray Gardener, Daylon Graphics Ltd.
 *
 * Portions of this module derived from GDAL drivers by
 * Frank Warmerdam, see http://www.gdal.org

My apologies in advance to Ray Gardener and Frank Warmerdam.

For writing:

SCAL    = gridpost distance in meters
hv_px   = hv_m / SCAL
span_px = span_m / SCAL
offset  = see TerragenDataset::write_header()
scale   = see TerragenDataset::write_header()
physical hv = (hv_px - offset) * 65536.0 / scale
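The relationships quoted from the driver reduce to simple arithmetic. A sketch with placeholder values (SCAL, offset and scale here are made up for illustration; the real ones are computed in TerragenDataset::write_header()):

```python
SCAL = 10.0      # grid-post distance in metres (placeholder)
offset = 0.0     # placeholder; real value comes from write_header()
scale = 65536.0  # placeholder; real value comes from write_header()

def to_pixel_units(hv_m):
    """hv_px = hv_m / SCAL"""
    return hv_m / SCAL

def to_physical(hv_px):
    """physical hv = (hv_px - offset) * 65536.0 / scale"""
    return (hv_px - offset) * 65536.0 / scale

hv_px = to_pixel_units(123.0)
print(hv_px, to_physical(hv_px))
```

With these placeholder values the round trip is the identity; with real header values the offset and scale rescale heights into the 16-bit range the format stores.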

This tells me that, prior to my WriteArray(somearray) above, I have to set the GeoTransform, the projection (SetProjection), and the unit type (SetUnitType) for things to (potentially) work.

From the GDAL API Tutorial:

import osr
import numpy

My apologies for creating an overly long post, and a confession: if and when I get this right, it would be nice to have all the notes in one place (hence the long post).

I have previously taken a picture (jpeg), converted it to a geotiff, and imported it as tiles into a PostGIS database. I am now attempting to create an elevation raster over which to drape the picture. This probably seems silly, but there is an artist whom I wish to honor, while at the same time not offending those who have worked assiduously to create and improve these great tools.

The artist is Belgian, so meters would be in order. She works in lower Manhattan, NY, so UTM 18. Now for some reasonable SetGeoTransform values. The above-mentioned picture is w=3649/h=2736; I'll have to give some thought to this.

Clearly I'm getting closer, but it's unclear if the SetUTM(18,1) was picked up. Is this a 4x4 in the Hudson River or a Local_CS (coordinate system)? What's a silent fail?

4x4 meters is a pretty small logical span.

So, perhaps this is as good as it gets. The SetGeoTransform gets the units right, sets the scale, and you have your Terragen World Space.

Final thought: I don't program, but to an extent I can follow along. That said, I first wondered if, and then what, the data looked like in my little Terragen World Space (the following thanks to http://www.gis.usu.edu/).

So this is gratifying. I imagine the difference between the numpy values used above and this result comes down to applying Float16 across this very small logical span.

Nearly completely gratifying, though it looks like I'm in La Concordia, Peru. So I have to figure out how to say "more north" - like New York north. Does SetUTM take a third element suggesting how far north or south? It seems I ran across a UTM chart yesterday that had lettered zone bands, something like C at the equator and maybe T or S for the New York area.

I actually thought the SetGeoTransform was essentially establishing the top-left northing and easting, and that this was influencing the "how far north/south" part of SetUTM. Off to gdal-dev.

Paddington Bear went to Peru because he had a ticket. I got there because I said that's where I wanted to be. Terragen, working as it does, gave me my pixel space. The subsequent calls to srs acted upon the hf2 (SetUTM), but the easting and northing were established under Terragen, so UTM 18 got set, but in a bounding box at the equator. Good enough.

gdal_translate got me to New York. I'm on Windows, so a command line. And the result:

So, we're back in NY. There are probably better ways to approach all this. I had to have a target format that accepted Create, as I was postulating/improvising a dataset from numpy as well. I need to look at other formats that allow Create. Elevation in GeoTIFF is a possibility (I think).

My thanks for all comments, suggestions, and gentle nudges toward appropriate reading. Making maps in Python is fun!


2 Answers

OK - no blowing up - but a simple basic comment:

If you downscale a raster graphic (made of pixels), this doesn't make the pixels smaller; it decreases the number of pixels used in your raster to make up the graphic.

Hence your intended target size - 64x32 - is a per-edge pixel COUNT - as in, it will be 64 pixels wide by 32 pixels tall.

This means that you are going the other direction from your request - that is, the pixel density (aka resolution) goes down as you scale your graphic, not up.

The use case in which shrinking an image down pushes up the pixel density is printing, in which, yes, you can push the pixel density upwards to whatever the high-end limit of the target printer is - often in excess of 300 DPI; some go well over 600 DPI.

If you scale your image in Photoshop, Gimp, Affinity Photo etc., while keeping the same 72 PPI for-web file, you will see it resample the image once scaled and see the resolution drop drastically. Similarly, if you export from your native file as .png and force the export dimensions down to your target size, then open the resulting file in an image viewer or Gimp, you will see directly what happens: giant pixels relative to your image - you know - Minecraft style!

Maybe this will help it make more sense - I threw it together pretty quickly, but I think it should do:

Here's my image at original resolution (10x16 pixels); the red grid marks my pixels:

I downscale the image to 5x4 pixels, and in this process the pixels remain the same size (because that's what pixels do - they're always the same size on a given display) and the software analyzes groups of pixels to determine the new pixel in that relative location - hence the loss of detail and subtlety of colour treatment when you drop pixel counts.
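That group-averaging step is easy to sketch. A minimal stdlib box filter over a grayscale grid, assuming the block size divides the image dimensions evenly (real editors use fancier resampling kernels than a plain average):

```python
def box_downsample(pixels, fx, fy):
    """Average fx-by-fy blocks of a 2D grayscale grid into single pixels."""
    rows, cols = len(pixels), len(pixels[0])
    out = []
    for r in range(0, rows, fy):
        row = []
        for c in range(0, cols, fx):
            block = [pixels[rr][cc]
                     for rr in range(r, r + fy)
                     for cc in range(c, c + fx)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 2x4 image: black on the left, white on the right.
src = [[0, 0, 255, 255],
       [0, 0, 255, 255]]
print(box_downsample(src, 2, 2))
```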


Parallelization

Crude parallelization improves performance slightly, but it does not scale linearly with the number of cores. On my 24-thread machine, the following yields a 30-40% improvement compared to the loop WritableRaster variant:

Since you are constructing a full 32-bit word from random bits, you could skip the niceties and simply generate a 32-bit random number as your p. What is unclear is how the edge cases would behave. For instance, I would have used an unsigned int instead of an int to hold your four values (0-255), as the leftmost one (R) might mess with the sign of p, etc. That should go roughly 4x faster than using four calls and multiplies for each pixel.
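The packing described above can be sketched in Python rather than the answer's Java (the ARGB bit layout of BufferedImage.setRGB is assumed; Python ints are unbounded, so a mask stands in for unsigned 32-bit wraparound):

```python
import random

def pack_argb(a, r, g, b):
    """Combine four 0-255 channel values into one 32-bit ARGB word."""
    return ((a << 24) | (r << 16) | (g << 8) | b) & 0xFFFFFFFF

def random_pixel(rng):
    """One draw instead of four: a full random 32-bit word."""
    return rng.getrandbits(32)

print(hex(pack_argb(0xFF, 0x12, 0x34, 0x56)))
```

In Java the same single-draw idea would be one call to Random.nextInt() per pixel instead of four channel draws plus shifts.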

Also, do you really need 720 million different random values, or does it just need to look random? You could have an array of, say, 1,000,000 pixels: instead of p, have p[1000000]. That's a pretty fast generation, 1M vs. 720M. Then simply generate random values between 0 and 999,999 and do your img.setRGB() with random selections from that palette. And yes, that is no faster than the above code. But consider NOT generating random values from 0-999,999. Consider, instead, taking these already-random pixels in a loop, from first to last, and then repeating that loop, over and over, as you progress through your image. Sure, it will "repeat", but visually it will be quite hard to see a pattern.

As a further alternative, having very quickly generated a million random pixels, fill your image array by doing the following:
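The code that was meant to follow is not shown here. As a guess at the palette-cycling idea just described (a sketch, not the original code):

```python
import random

def fill_from_palette(width, height, palette):
    """Tile a pre-generated random palette across the image, row-major."""
    n = len(palette)
    return [[palette[(y * width + x) % n] for x in range(width)]
            for y in range(height)]

rng = random.Random(42)
palette = [rng.getrandbits(32) for _ in range(1000)]  # stand-in for the 1M pixels
img = fill_from_palette(8, 4, palette)
```

Generating the palette once and indexing into it replaces per-pixel random draws with a modulo and an array lookup.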


Raster to vector conversion

In our discussion of vector data, we explained that raster data are often used as a backdrop layer, which then serves as a base from which vector features can be digitised.

Another approach is to use advanced computer programs to automatically extract vector features from images. Some features, such as roads, show in an image as a sudden change of colour from neighbouring pixels. The computer program looks for such colour changes and creates vector features as a result. This kind of functionality is normally only available in very specialised (and often expensive) GIS software.
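In miniature, the colour-change detection might look like this naive sketch (real extraction software is far more sophisticated): mark pixels whose right or lower neighbour has a different value.

```python
def colour_change_mask(grid):
    """True where a pixel differs from its right or lower neighbour."""
    rows, cols = len(grid), len(grid[0])
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and grid[r][c] != grid[r][c + 1]:
                mask[r][c] = True
            if r + 1 < rows and grid[r][c] != grid[r + 1][c]:
                mask[r][c] = True
    return mask

# Two colour regions side by side; the boundary column should light up.
demo = [[1, 1, 2],
        [1, 1, 2]]
mask = colour_change_mask(demo)
```

Chains of such marked pixels are what a vectorisation routine would then trace into line features.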


1 Answer

First, a little clarification:

The Pixels Per Unit setting controls the size of the geometry the image is rendered on. It does not rescale the raster image asset in memory or on disc, as you might be used to when resizing an image in Photoshop or similar programs. Those operations can take some time, because the program needs to walk through every pixel in the new image and calculate its colour by sampling the corresponding pixels in its previous incarnation.

By contrast, the scaling that Pixels Per Unit does is dirt cheap, operating on a handful of transformation matrices or vectors that the game was going to have to process anyway when sending them to the GPU to render each frame.

So, you shouldn't worry about the process of scaling itself causing a substantial performance impact.

Where pixels per unit can impact performance is in how the resulting size of your scene objects plays with Unity's acceleration structures for rendering and physics. Often these are tuned to assume that the stuff you're interacting with primarily - your player characters, trigger volumes, map tiles, etc. - is on a scale between tenths and tens of units in size. If we think of a unit as a meter, then this lets us model interactions with anything from house pets to jumbo jets.

Keeping a pixels per unit setting tuned so that your gameplay happens in this size range lets these systems do their work smoothly. It can also help make your math easy as a developer, sticking to small numbers that are simple to work with and remember ("Map tiles are 1 unit, the player's dash move travels 3 units", etc.).

You'll see an occasional tutorial though that says you should set your pixels per unit to 1, so that you can work in pixel coordinates for everything. I do not recommend this approach. Doing it this way, a single screen at 1080p spans 2000 game units, and most objects will be hundreds of units across. This can cause systems like physics to behave poorly because it's outside the range they've been tuned for.
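The numbers in that warning follow from a single division, sprite pixels / pixels per unit. A quick sketch (the 1920-pixel screen width is assumed):

```python
def world_size(pixels, ppu):
    """Size in world units of a sprite (or screen span) given pixels-per-unit."""
    return pixels / ppu

# PPU = 1: a 1080p screen spans roughly 2000 units horizontally.
print(world_size(1920, 1))
# PPU = 100: the same span is about 19 units, inside the tuned range.
print(world_size(1920, 100))
```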

So here again, the scaling itself isn't a significant performance sink, but the consequences it has for the size of your game interactions can be.


Abstract

Variable renewable energy is set to become a key energy source worldwide, but there is concern regarding the impact of the intermittency of its output when penetration is high. Energy system models need to tackle this issue by improving modelling resolution and scope. To allow for such modelling, more and better input datasets are needed on variable renewable energy potentials and yields. These need to be of global scope, of sufficient spatial and temporal resolution, and generated with transparent, consistent methods. This study develops such methods and applies them to generate these datasets at subnational and hourly resolution. The assessment is carried out for wind and solar technologies with consistent constraints covering geographical, social and economic aspects. Features from OpenStreetMap are converted into land cover and land use datasets and applied. Hourly energy output is simulated using NASA MERRA-2 meteorological datasets, reconciled with resource maps from the Global Wind Atlas and Global Solar Atlas platforms. Capacity supply curves are provided for 731 terrestrial zones and 339 offshore zones worldwide, along with corresponding hourly output profiles over a 10-year simulation period. The proposed energy potentials are relatively conservative compared with other studies. The datasets can serve as input for regional or global energy system models when analyzing high variable renewable energy shares.

