“What about the inside?” Jill asked.
“What about it?” replied Jack.
“Well, we have maps, aerial photos, GPS navigation gizmos, 3D buildings, even 3D models of canyons that we can use and browse on the web—BUT, these are for the outdoors only.”
“Uh, yeah… What’s wrong with that?”
“Well, everything’s for the outside. What if I want to SEE what’s inside a store? An apartment for rent? I want to see what that new restaurant looks like inside before making a trip down,” said Jill.
“Well, you can read about places, and see photos online. You know, the whole Web2.0 thing.”
“Yeah, they help. But somehow photos or videos of interiors don’t quite work for me. They’re just so…”
“Um… I guess that makes sense. Those tech folks can already do that, right? Or at least trying to figure it out, I hope.”
There is a plethora of technology to digitally capture, store, analyze, and manage information that is spatially referenceable—but mostly for the outdoors. From GPS to aerial photography to web-enabled interactive maps, technology for capturing, displaying, and even sharing the outdoor world has proven useful and has been widely adopted.
On the other hand, technology for indoor spaces is lagging behind. Manual drawings of floor plans and basic footprints are typical; other representations, such as 3D models of interiors, are difficult, costly, and crude where they exist at all; aerial photos are of no use for interior spaces; digital photos are abundant, but often lack the context, consistency, and continuity needed to better understand the captured space. Content and technology for interiors are not as well established as their outdoor counterparts, and yet their value and the need for them have always been present.
So, what gives? Why are the indoors lagging behind? How can they catch up? (Can they catch up?) How can the insides be as useful and widely adopted as the outsides? There definitely are many questions that follow Jill’s initial question: What about the inside?
To better answer Jill’s question, we focus on three topics: capturing, displaying, and sharing visual geospatial information. More specifically, we focus on the solution and direction chosen by EveryScape and discuss why certain choices were favored and other paths not taken. We also touch upon some trials and tribulations of a startup, and the necessity to innovate in a world where 800-pound gorillas rule.
GIS is typically defined as a system for capturing, storing, analyzing, and managing data and associated attributes that are spatially referenced to the Earth. In the current web world there are further needs, such as displaying and visualization, integration, editing, and sharing. Although all of these topics deserve attention, we focus on capturing, displaying, and sharing for the sake of time and interest.
Capturing generally entails gathering geographically referenceable information. Surveying techniques, for example, have been used for thousands of years to determine property lines and borders. In recent decades, capture technology for the outdoors has advanced at an incredible pace. Satellites have changed the game in how we both visually capture (e.g., aerial photographs) and locate or track things that are outdoors.
On a related note, digital cameras outsold film cameras in the U.S. in 2002, and they are becoming more and more integrated into people’s daily lives and behaviors. Cameras are embedded everywhere (in phones, laptops, cars, and streets), and storage keeps getting cheaper. One of the challenges lies in making the information captured by these devices geospatially referenceable, i.e., tagging each photo to a specific location and orientation at a point in time.
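As a rough illustration of what such a geotag involves, the sketch below attaches a location, orientation, and timestamp to a photo record. This is a minimal, hypothetical data model, not EveryScape’s actual format; the names `GeoTag` and `tag_photo` are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class GeoTag:
    """Location and orientation of a capture at a point in time."""
    lat: float        # degrees, WGS84
    lon: float        # degrees, WGS84
    heading: float    # degrees clockwise from true north
    taken_at: datetime

def tag_photo(photo_id: str, tag: GeoTag) -> dict:
    """Attach a geotag to a photo record (illustrative only)."""
    return {
        "photo": photo_id,
        "lat": tag.lat,
        "lon": tag.lon,
        "heading": tag.heading,
        "taken_at": tag.taken_at.isoformat(),
    }

record = tag_photo(
    "IMG_0042.jpg",
    GeoTag(42.3601, -71.0589, 90.0,
           datetime(2008, 5, 1, tzinfo=timezone.utc)))
```

In practice such fields map onto standard photo metadata (e.g., the GPS tags of EXIF), with heading being the piece that location alone does not give you.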
In this talk, we discuss how the proliferation of visual, photorealistic recording devices, such as digital cameras, panoramic rigs, and even video cameras, can help capture useful visual information of the indoors, and how that information can be made geospatially referenceable. We describe how EveryScape converts standard panoramic photography into geospatially referenceable information for the INDOORS, where GPS cannot go, and how, using computer graphics and computer vision techniques, the visual information can be further transformed into three-dimensional information that better orients, locates, and describes interior spaces.
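One simple way to make an indoor viewpoint geospatially referenceable, absent GPS, is to survey a single anchor point (say, the building entrance) and express each panorama’s position in local meters from it. The sketch below uses a flat-earth approximation to convert such local coordinates to approximate latitude/longitude; this is a generic technique, shown here only as an assumption about how indoor positions might be tied to the outdoor frame, not EveryScape’s actual method.

```python
import math

def local_to_geo(anchor_lat, anchor_lon, x_east_m, y_north_m):
    """Convert local indoor coordinates (meters east/north of a
    surveyed anchor point) to approximate WGS84 lat/lon.
    Flat-earth approximation; adequate over a single building."""
    lat = anchor_lat + y_north_m / 111_320.0
    lon = anchor_lon + x_east_m / (111_320.0 * math.cos(math.radians(anchor_lat)))
    return lat, lon

# A panorama shot 10 m east and 5 m north of the lobby anchor:
lat, lon = local_to_geo(42.3601, -71.0589, 10.0, 5.0)
```

The constant 111,320 is the approximate length in meters of one degree of latitude; a degree of longitude shrinks with the cosine of the latitude.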
Once captured, the data must be stored, analyzed, managed, and finally displayed to the user. At EveryScape, we set the following requirements for displaying our data: it must be immersive, interactive, photorealistic, scalable, and distributable to the masses. As our main representation, we chose panoramic photographs connected by three-dimensional movement animations. Panoramic photographs are immersive, interactive, and photorealistic; they describe an area in a very convincing manner. To better convey the space, a sense of movement is also necessary, especially indoors. We discuss how and why we chose this format to communicate space efficiently and effectively, short of a full three-dimensional description and display. The scalability and distribution challenges were solved by using common digital cameras as capture sources and regular panoramic photography for display and distribution.
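The representation above amounts to a graph: each panorama is a node, and an edge exists wherever a movement animation connects two viewpoints. Walking from one room to another then reduces to finding a path through that graph. The sketch below shows the idea with a breadth-first search over a toy restaurant; the node names and the `tour_path` helper are illustrative assumptions, not part of EveryScape’s actual system.

```python
from collections import deque

# Each panorama is a node; an edge means a movement animation
# exists between the two viewpoints. (Names are illustrative.)
edges = {
    "entrance": ["lobby"],
    "lobby": ["entrance", "dining", "bar"],
    "dining": ["lobby"],
    "bar": ["lobby"],
}

def tour_path(start, goal):
    """Shortest chain of panoramas (BFS), i.e., the sequence of
    movement animations to play when 'walking' start -> goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(tour_path("entrance", "bar"))  # ['entrance', 'lobby', 'bar']
```

Playing the pre-rendered animation along each edge gives the viewer the sense of movement, without requiring a full 3D model of the interior.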
Finally, sharing, community building, and crowdsourcing in the context of geographic information are becoming more and more important. Being able not only to tag visual information geospatially but also to share it with others, enabling them to provide input and feedback, makes the content not just better but also more useful, more interactive (hence community building), and more valuable.
Our strategy is to provide a visual platform to which users can add further visual information, as well as text, links, photos, audio, and video. We provide a mechanism for users to tell their stories in a geospatial-temporal manner: users can walk around, annotate, and narrate. To enable and empower users with these tools, the inside representation must work seamlessly with the outside representation. In the end, our direction is to give users a visual platform from which they can share, crowdsource, build communities, and tell their stories, for interiors and exteriors alike.
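For annotations inside a panorama, a natural anchor is the viewpoint plus a view direction (yaw and pitch), so a note appears pinned to what the user was looking at. The sketch below is a minimal, assumed data model for this; `Annotation`, `Scene`, and their fields are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """User content pinned to a view direction within a panorama."""
    author: str
    yaw_deg: float    # horizontal view direction, degrees
    pitch_deg: float  # vertical view direction, degrees
    text: str = ""
    link: str = ""

@dataclass
class Scene:
    pano_id: str
    annotations: list = field(default_factory=list)

    def annotate(self, note: Annotation) -> None:
        self.annotations.append(note)

scene = Scene("lobby")
scene.annotate(Annotation("jill", yaw_deg=45.0, pitch_deg=-10.0,
                          text="Great window seat here"))
```

Because each scene is already geospatially referenced, the same annotation carries over when the panorama is placed alongside the outdoor map, which is what lets inside and outside stories read as one.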
“I guess, in the end, I want a coherent experience, where I can see online what I will see in real life, not just for the insides, but for the outsides as well,” said Jill.
“That’s a mouthful…” replied Jack.
Mok Oh has over 15 years of experience in computer science and computer graphics R&D. He founded Mok3 Inc./EveryScape.com in 2002 and is the inventor of its technology, which is based on his doctoral dissertation work in the Computer Graphics Group at MIT. At EveryScape, he is responsible for product and technology development, functioning as software architect and leading intellectual property development. He holds multiple patents and publications in image-based modeling, image and photo editing, and 3D-related technologies. His research and development further span 3D modeling, ray tracing and light-transport algorithms, interactive tools, and image processing. He has been an invited speaker at multiple venues, including the MIT Lecture Series, the Harvard School of Architecture, various universities in South Korea, and the Asia-Pacific Innovation and Entrepreneurship Conference. Prior to his doctoral work at MIT, he worked for Accenture as an Information Systems Analyst, developing business software for AT&T. Mok also earned a Master of Science in Engineering degree in Computer Information Sciences from the University of Pennsylvania, and Bachelor of Arts degrees in Computer Science, Art History, and Studio Art from Oberlin College.