I have been tracking Facebook closely. Their recently released index of 2 trillion items is not of web content but of public Facebook posts by users (much of which may be commentary on web content). 

This means that, for the first time, we can search public Facebook posts about a given topic to gauge public opinion. Facebook already handles 1.5B searches daily, so it is a smart play to leverage more of the content they already have and grow their market share further. 
It’s like the analogy I gave: everyone wants to be the concierge. If people are asking you the questions, you gain their trust, and more importantly you get to leverage and monetize the relationship. 
Realtime search, or leveraging social content in search, is not a new concept. Right now, if you conduct a search on Google, Twitter often shows up in the SERPs with realtime Tweets related to the query. This is Google’s attempt to battle stale content.  

Relevance is contextual. If the query is “Who was the first president of the US,” any concierge, including printed media, can answer it. However, search queries are increasingly skewing towards answers that require realtime data. 
In my example, you ask a hotel concierge where to find a good place to eat: 

– Concierge A tells you about a place that turns out to be dated, with boring food, or that has even shut down by the time you visit.   

– Concierge B tells you about a hip new pop-up restaurant, open tonight only, with an up-and-coming chef serving something wild and new. 
Clearly Concierge B is the one who gains your trust. The same is true with search engines.  
Google addressed this years ago with algorithm tweaks such as QDF (Query Deserves Freshness), by integrating Google News feeds (much fresh content is not discovered in a crawl but instead fed in via RSS or similar), and now again by adding Tweets back into the search results.  
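To make the freshness idea concrete, here is a minimal sketch of how a time-decay boost of the QDF flavor could work. Everything here is an illustrative assumption — the half-life, the function names, and the multiplicative scoring are invented for this post, not Google's actual algorithm:

```python
from datetime import datetime, timezone

def freshness_boost(published, now=None, half_life_hours=24.0):
    # Exponential decay: the boost halves every `half_life_hours`.
    # The 24-hour half-life is a made-up illustration.
    now = now or datetime.now(timezone.utc)
    age_hours = (now - published).total_seconds() / 3600.0
    return 0.5 ** (age_hours / half_life_hours)

def score(base_relevance, published, query_deserves_freshness, now=None):
    # Only apply the boost when the query has been judged to deserve
    # freshness (the "QDF" classification step itself is not modeled here).
    if not query_deserves_freshness:
        return base_relevance
    return base_relevance * (1.0 + freshness_boost(published, now))
```

The point of the sketch: a one-day-old post gets half the boost of a brand-new one, so realtime content floats to the top only for queries where freshness matters.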
Now, a quick look back will give us context to look forward: 
Understand that the web was very static at its inception; much of it was digitized print and hard-copy books put online. Then blogs and Q&A sites became popular and came to comprise most of the web’s content, and search engine algorithms weighted fresh blog and Q&A links heavily. Then Facebook and Twitter hit the scene: less content was published to blogs and more was just spit out in realtime. Now, in the time it takes to write a blog post, the rest of the savvy world has already facebooked/tweeted/scoped about it.  
Publishers are finding that the only way to compete is to leverage AI journalists. Not even joking here: did you know that much of the content now generated by the Associated Press comes from Automated Insights’ “robot journalism”? It’s a thing: http://www.poynter.org/news/mediawire/379809/with-new-product-automated-insights-hopes-to-make-robot-journalism-cheaper-and-more-plentiful/
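Automated Insights’ actual pipeline is proprietary, but the core idea of this kind of robot journalism is template-driven generation from structured data. A toy sketch, with an invented template and invented data fields:

```python
def earnings_recap(company, eps, eps_expected, revenue_m):
    # Fill a story template from structured financial data -- a toy
    # version of template-based natural-language generation.
    # Template wording and fields are hypothetical.
    beat = "beat" if eps > eps_expected else "missed"
    return (
        f"{company} {beat} analyst expectations, reporting earnings of "
        f"${eps:.2f} per share against a forecast of ${eps_expected:.2f}, "
        f"on revenue of ${revenue_m:.0f} million."
    )
```

Feed it a structured earnings feed and it emits a readable recap in milliseconds — which is exactly why structured realtime data is the prize.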
So he who holds realtime “structured” content gains a foothold and a competitive advantage in the competition for our time, and gets to leverage us for advertising dollars.