Disaster response teams require timely and accurate information to prioritize resources and save lives. Social media platforms like Twitter and Instagram provide valuable real-time data during disasters. However, manually processing and filtering large volumes of multimedia posts to extract critical disaster-related content is inefficient. This project aims to develop a system that automatically processes both images and text from social media posts to detect disaster-related information, enabling faster response times and resource allocation.
The project delivers a comprehensive system capable of:
- Detecting disaster-related objects and events in images.
- Analyzing text posts to identify disaster-related information, such as calls for help or damage reports (a toy illustration follows this list).
- Integrating image and text classification models into a web application for real-time social media post processing.
- Handling multiple requests concurrently, so the interface stays responsive and the service scales.
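As a concrete (and deliberately simplified) illustration of the text side of this task, the baseline below flags posts that look like calls for help or damage reports using keyword patterns. It is a hedged sketch, not the project's actual model; every pattern and label name here is an assumption.

```python
import re

# Keyword patterns standing in for a trained text classifier
# (illustrative only; the real project uses learned models).
HELP_PATTERNS = [r"\bhelp\b", r"\brescue\b", r"\btrapped\b", r"\bevacuat\w*"]
DAMAGE_PATTERNS = [r"\bcollapsed?\b", r"\bflood\w*", r"\bdestroyed\b", r"\bdamage\w*"]

def classify_post(text: str) -> str:
    """Return a coarse label for a social media post."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in HELP_PATTERNS):
        return "call_for_help"
    if any(re.search(p, lowered) for p in DAMAGE_PATTERNS):
        return "damage_report"
    return "not_disaster_related"

print(classify_post("People trapped on the roof, please send rescue!"))
# -> call_for_help
```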
For full details, see the Project Documentation link.

To set up and run the system locally:
1. **Clone the Repository**

```bash
git clone <repository_url>
cd Analysing-Social-Media-Images-and-Text-for-Disaster-Response
```
2. **Download Model Weights**

- Download the ResNet model weights from the Google Drive link.
- Place the downloaded file in the `models` folder.
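Once the weights are in place, the server can load them at startup. As a point of reference only, loading such a checkpoint and running a single image through it in PyTorch might look like the sketch below; the file name `models/resnet_disaster.pth`, the ResNet-50 backbone, and the two-class head are assumptions, not details taken from this repository.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Assumptions: PyTorch, ResNet-50 backbone, binary disaster/not-disaster
# head, checkpoint saved as a state_dict (file name is hypothetical).
model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("models/resnet_disaster.pth", map_location="cpu"))
model.eval()  # inference mode

# Standard ImageNet-style preprocessing, assumed to match training.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("post.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)
print(f"P(disaster) = {probs[0, 1]:.3f}")  # assumes index 1 = disaster class
```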
3. **Set Up Virtual Environment**

```bash
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
```
4. **Install Dependencies**

```bash
pip install -r requirements.txt
```
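The authoritative dependency list is `requirements.txt` itself. As a rough, non-authoritative illustration, a stack matching this project's description (Flask server, ResNet inference, image handling) implies entries along these lines:

```text
flask
torch
torchvision
pillow
```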
5. **Run the Flask Server**

```bash
flask run
```
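Note that `flask run` locates the application via the `FLASK_APP` environment variable or an `app.py`/`wsgi.py` in the working directory. For context, here is a minimal sketch of what a serving endpoint for a combined image-and-text system could look like; the `/predict` route, field names, and `classify_image`/`classify_text` helpers are hypothetical stand-ins for the project's real models.

```python
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

def classify_image(img: Image.Image) -> str:
    # Hypothetical stand-in for the ResNet image classifier.
    return "disaster"

def classify_text(text: str) -> str:
    # Hypothetical stand-in for the text classifier.
    return "call_for_help"

@app.route("/predict", methods=["POST"])
def predict():
    """Accept an image file and/or a text post; return predictions."""
    result = {}
    if "image" in request.files:
        img = Image.open(request.files["image"].stream).convert("RGB")
        result["image_label"] = classify_image(img)
    text = request.form.get("text", "")
    if text:
        result["text_label"] = classify_text(text)
    return jsonify(result)
```

Because Flask's development server is not intended for production, concurrent traffic (the scalability goal above) is usually handled by a production WSGI server, e.g. `gunicorn -w 4 app:app`, assuming the application module is named `app.py`.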
6. **Access the Web Application**

Open the address Flask prints on startup (http://127.0.0.1:5000 by default), then:

- Upload social media images and text posts via the web interface.
- The system will analyze the content and provide disaster-related information.
- Monitor the results and adjust the system settings as needed.
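The API can also be exercised from the command line. Against the hypothetical `/predict` endpoint sketched above, a request might look like this (field names are assumptions):

```bash
curl -X POST http://127.0.0.1:5000/predict \
  -F "image=@flood_photo.jpg" \
  -F "text=Bridge collapsed near Main St, people need evacuation"
```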
Contributions are welcome:

- Fork the repository and make your changes.
- Submit a pull request with a detailed description of your modifications.
This project is licensed under the MIT License. See the LICENSE file for details.
For questions or feedback, please contact Rakshit Garg.