3D reconstruction tools

Hi everyone.

Today, someone linked me the "66 hottest Windows tools". I know, those words coming from a Linux guy sound like something ugly is about to happen, BUT… not today. I saw this Microcrap Photosynth application and wondered: how come they have such a cool, yet totally unknown application? So I did some research and found they just licensed the stuff from, I think, the University of Washington.

OK, so while Microsoft paid $$ for the license, part of the code was later released under the GPL… in, looks like, August '08:
http://phototour.cs.washington.edu/bundler/ is called Bundler, and it seems to do the fancy structure-from-motion math in the background.
Then there was this one guy who started building a Photosynth-like application on top of Bundler datasets:
http://da.vidr.cc/projects/pixelstruct/, known as PixelStruct.
Unfortunately, PixelStruct is sorta a memory hog on my machine, eating a gig of RAM for each image it loads.

So I had this idea: cool reconstruction tool + Panda's browser plugin + maybe some other cool ideas… wouldn't that be cool?
Not only for a virtual 3D photo gallery (which alone would be quite cool), but also for 3D reconstructions and stuff.

All in all pretty neat stuff, check the videos.

Anyone interested? Ideas? Comments? Worth setting up a Panda-based Bundler dataset viewer?

[EDIT] Oh, before I forget about it… http://grail.cs.washington.edu/software/pmvs/
[EDIT2] Oh, and check out the gallery of reconstructed objects… http://grail.cs.washington.edu/software/pmvs/pmvs-1/index.html sorta DEMANDS some cool usage. Artist/pipeline tools or so.

UW CompSci :) I'll be in there soon. Just gotta finish my intro programming classes… (Java, ick)

I had wondered where the Photosynth stuff Microsoft bought came from.

Looks interesting.

Mhm. I tried to write a small script which reads the 3D points and camera positions out of the Bundler database.
But I sorta don't know how to continue from here. The points all seem to lie in one plane; maybe the input data is just a bit weird.

Just in case anyone has another dataset to test it on:

import direct.directbase.DirectStart
from pandac.PandaModules import Vec3, Mat3, Mat4

# read the whole bundle file and skip the "# Bundle file ..." comment header
inputfile = open("./bundle/bundle.out", "r").read().split("\n")
while inputfile[0].startswith("#"):
    inputfile.pop(0)

# first real line: <num_cameras> <num_points>
num_cams, num_points = [int(n) for n in inputfile.pop(0).split()]
camlist = []
"""
<f> <k1> <k2>   [the focal length, followed by two radial distortion coeffs]
    <R>             [a 3x3 matrix representing the camera rotation]
    <t>             [a 3-vector describing the camera translation]
"""

 
for i in range(num_cams):
    # <f> <k1> <k2>
    focal, dist1, dist2 = [float(n) for n in inputfile.pop(0).split()]

    # <R>, one row per line
    r11, r12, r13 = [float(n) for n in inputfile.pop(0).split()]
    r21, r22, r23 = [float(n) for n in inputfile.pop(0).split()]
    r31, r32, r33 = [float(n) for n in inputfile.pop(0).split()]

    # <t>
    tx, ty, tz = [float(n) for n in inputfile.pop(0).split()]

    camparams = [r11, r12, r13, r21, r22, r23, r31, r32, r33, tx, ty, tz]  # , focal, dist1, dist2]
    camlist.append(camparams)
   
for i in camlist:
    print i
"""

    <position>      [a 3-vector describing the 3D position of the point]
    <color>         [a 3-vector describing the RGB color of the point]
    <view list>     [a list of views the point is visible in]


The view list begins with the length of the list (i.e., the number of cameras the point is visible in). The list is then given as a list of quadruplets <camera> <key> <x> <y>, where <camera> is a camera index, <key> the index of the SIFT keypoint where the point was detected in that camera, and <x> and <y> are the detected positions of that keypoint. Both indices are 0-based (e.g., if camera 0 appears in the list, this corresponds to the first camera in the scene file and the first image in "list.txt"). The pixel positions are floating point numbers in a coordinate system where the origin is the center of the image, the x-axis increases to the right, and the y-axis increases towards the top of the image. Thus, (-w/2, -h/2) is the lower-left corner of the image, and (w/2, h/2) is the top-right corner (where w and h are the width and height of the image). 
"""
pointlist = []

for i in range(num_points):
    # <position>
    x, y, z = [float(n) for n in inputfile.pop(0).split()]

    # <color>, integer values in 0..255
    r, g, b = [float(n) for n in inputfile.pop(0).split()]

    # <view list>: a length, then <camera> <key> <x> <y> quadruplets.
    # the per-view pixel coords get their own names so they don't
    # overwrite the 3D position parsed above
    viewlist = inputfile.pop(0).split()
    viewlistOutput = []
    for j in range(int(viewlist.pop(0))):
        camindex = int(viewlist.pop(0))
        siftIndex = int(viewlist.pop(0))
        px = float(viewlist.pop(0))
        py = float(viewlist.pop(0))
        viewlistOutput.append([camindex, siftIndex, px, py])

    point = [x, y, z, r, g, b, viewlistOutput]
    pointlist.append(point)
    

for i in pointlist:
    print i
    print "YAY!"
print len(pointlist)

for p in pointlist:
    pointmodel = loader.loadModel("smiley")
    pointmodel.reparentTo(render)
    pointmodel.setScale(1)
    pointmodel.setPos(p[0], p[1], p[2])
    print pointmodel.getPos()
    # bundler colors are 0..255, setColorScale expects 0..1
    pointmodel.setColorScale(p[3] / 255.0, p[4] / 255.0, p[5] / 255.0, 1)

for cam in camlist:
    cammodel = loader.loadModel("zup-axis")
    cammodel.reparentTo(render)
    myrotmat = Mat3()
    myrotmat.set(cam[0], cam[1], cam[2], cam[3], cam[4], cam[5], cam[6], cam[7], cam[8])
    mymatrix = Mat4()
    mymatrix.setUpper3(myrotmat)  # thx to craig for the rotation stuff :)
    cammodel.setMat(mymatrix)
    # bundler stores the world-to-camera transform x_cam = R*X + t,
    # so the camera centre in world space is c = -R^T * t
    tx, ty, tz = cam[9], cam[10], cam[11]
    cammodel.setPos(-(cam[0] * tx + cam[3] * ty + cam[6] * tz),
                    -(cam[1] * tx + cam[4] * ty + cam[7] * tz),
                    -(cam[2] * tx + cam[5] * ty + cam[8] * tz))
run()
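
[EDIT] Loading a full smiley model per point gets heavy fast once a dataset has a few thousand points. Here's a rough, untested sketch of stuffing all points into a single Geom with Panda's GeomVertexData/GeomPoints API instead; the makePointCloud helper is just a name I made up, and it assumes pointlist in the format built above (colors 0..255). It would replace the smiley loop, before run():

from pandac.PandaModules import GeomVertexFormat, GeomVertexData, GeomVertexWriter, GeomPoints, Geom, GeomNode

def makePointCloud(pointlist):
    # one vertex per reconstructed point, carrying position and color
    vdata = GeomVertexData("bundlerpoints", GeomVertexFormat.getV3c4(), Geom.UHStatic)
    vertex = GeomVertexWriter(vdata, "vertex")
    color = GeomVertexWriter(vdata, "color")
    prim = GeomPoints(Geom.UHStatic)
    for n, p in enumerate(pointlist):
        vertex.addData3f(p[0], p[1], p[2])
        color.addData4f(p[3] / 255.0, p[4] / 255.0, p[5] / 255.0, 1.0)
        prim.addVertex(n)
        prim.closePrimitive()
    geom = Geom(vdata)
    geom.addPrimitive(prim)
    node = GeomNode("pointcloud")
    node.addGeom(geom)
    return node

cloud = render.attachNewNode(makePointCloud(pointlist))
cloud.setRenderModeThickness(3)  # draw each point a few pixels big

That way the whole cloud is one NodePath instead of thousands, which should be much friendlier to the scene graph (and the RAM).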
    

Thanks for this thread, I was quite surprised. For a project in a course at my university I started implementing something similar a few weeks ago. It is under heavy development right now (so don’t judge just yet). Here is a vid: http://www.filesavr.com/vid1.