Recent years have seen an increase in the use of social media for various decision-making purposes in the context of urban computing and smart cities, including the management of public parks. However, as such decision-making tasks become more autonomous, a critical concern is the extent to which these analyses are fair and inclusive. In this article, we examine the biases that exist in social media analysis pipelines focused on studying recreational visits to urban parks. More precisely, we demonstrate the potential biases of different data sources for estimating the number and demographics of visitors through a comparison of image content shared on Instagram and Flickr from 10 urban parks in Seattle, Washington. We draw a comparison against a traditional intercept survey of park visitors and a multi-modal city-wide survey of residents. We then evaluate the viability of more complex AI facial recognition algorithms and their capability to remove some of the presented biases, examining them through the lens of algorithmic fairness and their impact on sensitive demographic groups. We show that, despite promising results, a new set of equity concerns arises when AI algorithms are used.