Last Post 29 May 2006 08:27 AM by anon
Mapping a 2 to 4 GB vector of 32-bit words
1 Reply

anon
New Member
29 May 2006 08:27 AM
Hi everyone, I am trying to create a kind of map from a VERY long vector of 32-bit words (up to 4 GB!). The location of each event is coded in the two least significant bytes. I would like to know how to proceed to create an array that could be treated as a map over the 16 least significant bits (each combination is in fact a pixel...) without crashing IDL!
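
For reference, a minimal sketch (in IDL) of the kind of mapping being asked about, assuming for the moment that the events already fit in memory in an unsigned 32-bit array (here called events, a placeholder name), and that the 65536 possible locations are to be viewed as a 256 x 256 image:

    pix = uint(events)                        ; keep only the 16 least significant bits
    map = histogram(pix, min=0, max=65535L)   ; count of events at each 16-bit location
    image = reform(map, 256, 256)             ; one possible arrangement: 256 x 256 pixels

The memory question for the full 2 to 4 GB vector is addressed in the reply below.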

Deleted User
New Member
29 May 2006 08:27 AM
It sounds to me like your issue is with the size of memory, not with the mapping. The mapping should be easy if your data are in the least significant two bytes. The following three lines of code demonstrate:

    IDL> x = ulindgen(65536)        ; 32-bit values
    IDL> y = uint(x)                ; 16-bit conversion
    IDL> print, array_equal(x, y)   ; Are the values of these two arrays identical?
       1

If you are working with IDL on a 32-bit operating system, like most Windows, Mac OS X or Linux installations, then you will find it difficult to assign a variable more than 1 GB of memory, i.e. a maximum of about 500,000,000 short-integer elements or 250,000,000 long-integer elements. There is a Tech Tip at RSI's website (#3346, "Overcoming Windows Memory Allocation Limitations") at http://www.ittvis.com/ser...echtip.asp?ttid=3346 which explains the issue. Although its examples are from Windows, its explanations are also relevant to IDL running on any other 32-bit operating system.

So, if you really need to process what sounds like a 1-billion-element long-integer array, then you will need to consider whether you can process it in IDL in subsets, saving intermediate results out to the operating system. If you CAN do this and your subsets are very large, I suspect that the optimal strategy would be to do all your processing in equal-size blocks. Declare the large array blocks at the beginning of your program, then keep reusing the same blocks by referencing them through pointers, like:

    pLongIntBlock = ptr_new(lonarr(100000000))
    pShortIntBlock = ptr_new(uintarr(100000000))
    for i = 0, 9 do begin                        ; Process 1 billion elements in 100-million-element blocks
        readu, lun, *pLongIntBlock               ; Assuming you are importing data from a simple binary file
        *pShortIntBlock = uint(*pLongIntBlock)
        ; ... do some processing on *pShortIntBlock
        writeu, lun2, *pShortIntBlock
    endfor

I think that's the most reasonable approach. Hope I did not go off on a wild tangent.

James Jones
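
Building on the block-processing idea above, here is a minimal self-contained sketch that accumulates the event-count map chunk by chunk, so the full 2 to 4 GB vector never has to sit in memory at once. It assumes the events are stored as raw unsigned 32-bit words, in machine byte order, in a flat binary file; the file name events.dat and the 100-million-word chunk size are placeholders:

    nPerChunk = 100000000ULL                  ; 100 million 32-bit words (400 MB) per read
    map = lonarr(65536)                       ; one bin per 16-bit location

    openr, lun, 'events.dat', /get_lun
    info = fstat(lun)
    nTotal = ulong64(info.size) / 4ULL        ; number of 32-bit words in the file
    nRead = 0ULL

    chunk = ulonarr(nPerChunk)
    while nRead lt nTotal do begin
        nThis = (nTotal - nRead) < nPerChunk                ; '<' is IDL's minimum operator
        if nThis lt nPerChunk then chunk = ulonarr(nThis)   ; smaller final chunk
        readu, lun, chunk
        map += histogram(uint(chunk), min=0, max=65535L)    ; accumulate counts of the 16 LSBs
        nRead += nThis
    endwhile
    free_lun, lun

    image = reform(map, 256, 256)             ; view the 65536 locations as a 256 x 256 image

The running histogram keeps the memory footprint at roughly one chunk (400 MB) plus the 256 KB map, which stays well under the 1 GB limit discussed above.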