Compare commits

..

12 Commits

14 changed files with 331 additions and 321 deletions

.gitignore

@@ -12,4 +12,5 @@ dist/
helper
db
report/build
.DS_Store


@@ -1,29 +1,49 @@
# crosspost
**crosspost** is a browser-based tool designed to support *digital ethnography*, the study of how people interact, communicate, and form culture in online spaces such as forums, social media platforms, and comment-driven communities.
A web-based analytics platform for exploring online communities. Built as a final year CS project at UCC, crosspost ingests data from Reddit, YouTube, and Boards.ie, runs NLP analysis on it (emotion detection, topic classification, named entity recognition, stance markers), and surfaces the results through an interactive dashboard.
The motivating use case is digital ethnography — studying how people talk, what they talk about, and how culture forms in online spaces. The included dataset is centred on Cork, Ireland.
The project aims to make it easier for students, researchers, and journalists to collect, organise, and explore online discourse in a structured and ethical way, without requiring deep technical expertise.
## What it does
- Fetch posts and comments from Reddit, YouTube, and Boards.ie (or upload your own .jsonl file)
- Normalise everything into a unified schema regardless of source
- Run NLP analysis asynchronously in the background via Celery workers
- Explore results through a tabbed dashboard: temporal patterns, word clouds, emotion breakdowns, user activity, interaction graphs, topic clusters, and more
- Multi-user support — each user has their own datasets, isolated from everyone else
By combining data ingestion, analysis, and visualisation in a single system, crosspost turns raw online interactions into meaningful insights about how conversations emerge, evolve, and spread across platforms.
# Prerequisites
- Docker & Docker Compose
- A Reddit App (client id & secret)
- YouTube Data v3 API Key
## Goals for this project
- Collect data ethically: enable users to link/upload text, images, and interaction data (messages etc.) from specified online communities. An automated import method (using APIs or scraping techniques) could potentially be included as well.
- Organise content: Store gathered material in a structured database with tagging for themes, dates, and sources.
- Analyse patterns: Use natural language processing (NLP) to detect frequent keywords, sentiment, and interaction networks.
- Visualise insights: Present findings as charts, timelines, and network diagrams to reveal how conversations and topics evolve.
- Have clearly stated and explained ethical and privacy guidelines for users.

The student will design the architecture, implement data pipelines, integrate basic NLP models, and create an interactive dashboard.
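As one concrete illustration of the keyword analysis described in the goals, here is a minimal standard-library sketch (not crosspost's actual pipeline — the tokeniser and stopword list are placeholders):

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "a", "an", "and", "of", "to", "in", "is", "was"})

def top_keywords(texts, n=5):
    """Count the most frequent non-stopword tokens across a list of posts."""
    tokens = []
    for text in texts:
        tokens.extend(t for t in re.findall(r"[a-z']+", text.lower())
                      if t not in STOPWORDS)
    return Counter(tokens).most_common(n)

posts = [
    "Cork traffic is getting worse every year",
    "Anyone know a good coffee spot in Cork city?",
    "Cork city centre traffic was bad again today",
]
print(top_keywords(posts, n=1))  # [('cork', 3)]
```

The real system layers emotion detection, topic classification, and entity recognition on top of this kind of token-level counting.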
# Setup
1) **Clone the Repo**
```
git clone https://github.com/your-username/crosspost.git
cd crosspost
```
Beyond programming, the project involves applying ethical research principles, handling data responsibly, and designing for non-technical users. By the end, the project will demonstrate how computer science can bridge technology and social research — turning raw online interactions into meaningful cultural insights.
2) **Configure Environment Variables**
```
cp example.env .env
```
Fill in each required empty environment variable. Some are already filled in; these are sensible defaults that usually don't need to be changed.
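For reference, a filled-in `.env` might look like the following (the variable names here are illustrative — use the keys that actually appear in `example.env`):

```
REDDIT_CLIENT_ID=your-reddit-app-id
REDDIT_CLIENT_SECRET=your-reddit-app-secret
YOUTUBE_API_KEY=your-youtube-data-v3-key
POSTGRES_PASSWORD=changeme
```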
## Scope
3) **Start everything**
```
docker compose up -d
```
This project focuses on:
- Designing a modular data ingestion pipeline
- Implementing backend data processing and storage
- Integrating lightweight NLP-based analysis
- Building a simple, accessible frontend for exploration and visualisation
This starts:
- `crosspost_db` — PostgreSQL on port 5432
- `crosspost_redis` — Redis on port 6379
- `crosspost_flask` — Flask API on port 5000
- `crosspost_worker` — Celery worker for background NLP/fetching tasks
- `crosspost_frontend` — Vite dev server on port 5173
# Requirements
# Data Format for Manual Uploads
If you want to upload your own data rather than fetch it via the connectors, the expected format is newline-delimited JSON (.jsonl) where each line is a post object:
```json
{"id": "abc123", "author": "username", "title": "Post title", "content": "Post body", "url": "https://...", "timestamp": 1700000000.0, "source": "reddit", "comments": []}
```
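A minimal sketch of how such a file could be read and validated before upload (field names taken from the example above; the exact validation crosspost performs may differ):

```python
import json

REQUIRED_FIELDS = {"id", "author", "title", "content",
                   "url", "timestamp", "source", "comments"}

def load_jsonl(lines):
    """Parse newline-delimited JSON posts, raising on missing fields."""
    posts = []
    for lineno, line in enumerate(lines, start=1):
        line = line.strip()
        if not line:
            continue  # skip blank lines
        post = json.loads(line)
        missing = REQUIRED_FIELDS - post.keys()
        if missing:
            raise ValueError(f"line {lineno}: missing fields {sorted(missing)}")
        posts.append(post)
    return posts

sample = ('{"id": "abc123", "author": "username", "title": "Post title", '
          '"content": "Post body", "url": "https://...", '
          '"timestamp": 1700000000.0, "source": "reddit", "comments": []}')
posts = load_jsonl([sample])
print(posts[0]["source"])  # reddit
```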
- **Python** ≥ 3.9
- **Python packages** listed in `requirements.txt`
- **npm** ≥ 11
# Notes
- **GPU support**: The Celery worker is configured with `--pool=solo` to avoid memory conflicts when multiple NLP models are loaded. If you have an NVIDIA GPU, uncomment the deploy.resources block in docker-compose.yml and make sure the NVIDIA Container Toolkit is installed.
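For reference, a GPU reservation in Compose generally looks like the block below (the exact block in crosspost's docker-compose.yml may differ; the service name is illustrative):

```yaml
services:
  worker:                        # the crosspost_worker service
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia     # requires NVIDIA Container Toolkit on the host
              count: 1
              capabilities: [gpu]
```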


@@ -1,8 +0,0 @@
# Generic User Data Transfer Object for social media platforms
class User:
    def __init__(self, username: str, created_utc: int):
self.username = username
self.created_utc = created_utc
# Optionals
self.karma = None

Binary image files added or changed in this diff (previews not shown):

- new image, 274 KiB
- new image, 90 KiB
- replaced image, 50 KiB → 50 KiB
- report/img/moods.png (new, 16 KiB)
- report/img/ngrams.png (new, 38 KiB)
- report/img/signature.jpg (new, 152 KiB)
- new image, 17 KiB

File diff suppressed because it is too large.


@@ -104,3 +104,46 @@
pages = {183--204}
}
@misc{cook2023ethnography,
author = {Cook, Chloe},
title = {What is the Difference Between Ethnography and Digital Ethnography?},
year = {2023},
month = jan,
day = {19},
howpublished = {\url{https://ethosapp.com/blog/what-is-the-difference-between-ethnography-and-digital-ethnography/}},
note = {Accessed: 2026-04-16},
organization = {EthOS}
}
@misc{giuffre2026sentiment,
author = {Giuffre, Steven},
title = {What is Sentiment Analysis?},
year = {2026},
month = mar,
howpublished = {\url{https://www.vonage.com/resources/articles/sentiment-analysis/}},
note = {Accessed: 2026-04-16},
organization = {Vonage}
}
@misc{mungalpara2022stemming,
author = {Mungalpara, Jaimin},
title = {Stemming Lemmatization Stopwords and {N}-Grams in {NLP}},
year = {2022},
month = jul,
day = {26},
howpublished = {\url{https://jaimin-ml2001.medium.com/stemming-lemmatization-stopwords-and-n-grams-in-nlp-96f8e8b6aa6f}},
note = {Accessed: 2026-04-16},
organization = {Medium}
}
@misc{chugani2025ethicalscraping,
author = {Chugani, Vinod},
title = {Ethical Web Scraping: Principles and Practices},
year = {2025},
month = apr,
day = {21},
howpublished = {\url{https://www.datacamp.com/blog/ethical-web-scraping}},
note = {Accessed: 2026-04-16},
organization = {DataCamp}
}


@@ -1,21 +1,18 @@
from abc import ABC, abstractmethod
from dto.post import Post
import os
class BaseConnector(ABC):
# Each subclass declares these at the class level
source_name: str # machine-readable: "reddit", "youtube"
display_name: str # human-readable: "Reddit", "YouTube"
required_env: list[str] = [] # env vars needed to activate
source_name: str # machine readable
display_name: str # human readable
required_env: list[str] = []
search_enabled: bool
categories_enabled: bool
@classmethod
def is_available(cls) -> bool:
"""Returns True if all required env vars are set."""
import os
return all(os.getenv(var) for var in cls.required_env)
@abstractmethod
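The availability check above can be exercised with a minimal concrete subclass. Note that the `fetch` method below is hypothetical — the real abstract method's signature is cut off in this diff:

```python
import os
from abc import ABC, abstractmethod

class BaseConnector(ABC):
    # Each subclass declares these at the class level
    source_name: str                 # machine readable, e.g. "reddit"
    display_name: str                # human readable, e.g. "Reddit"
    required_env: list[str] = []     # env vars needed to activate

    @classmethod
    def is_available(cls) -> bool:
        """True only when every required env var is set and non-empty."""
        return all(os.getenv(var) for var in cls.required_env)

    @abstractmethod
    def fetch(self, query: str):     # hypothetical signature
        ...

class DummyConnector(BaseConnector):
    source_name = "dummy"
    display_name = "Dummy"
    required_env = ["DUMMY_API_KEY"]

    def fetch(self, query: str):
        return []

os.environ.pop("DUMMY_API_KEY", None)
print(DummyConnector.is_available())   # False until the key is set
os.environ["DUMMY_API_KEY"] = "secret"
print(DummyConnector.is_available())   # True
```

Keeping `required_env` declarative lets the application list only connectors whose credentials are actually configured.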


@@ -87,7 +87,7 @@ class BoardsAPI(BaseConnector):
post = self._parse_thread(html, post_url)
return post
with ThreadPoolExecutor(max_workers=10) as executor:
with ThreadPoolExecutor(max_workers=5) as executor:
futures = {executor.submit(fetch_and_parse, url): url for url in urls}
for i, future in enumerate(as_completed(futures)):
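In isolation, the bounded-pool pattern in this hunk looks like the sketch below; `fetch_one` is a stand-in for the real fetch-and-parse step, and `max_workers=5` mirrors the value the diff settles on:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_one(url: str) -> str:
    # Stand-in for the real HTTP request + HTML parse
    return f"parsed:{url}"

urls = [f"https://example.invalid/thread/{i}" for i in range(8)]
results = {}

# max_workers caps concurrent requests; lowering it from 10 to 5
# reduces load on the remote server at some cost in throughput
with ThreadPoolExecutor(max_workers=5) as executor:
    futures = {executor.submit(fetch_one, url): url for url in urls}
    for future in as_completed(futures):
        results[futures[future]] = future.result()

print(len(results))  # 8
```

`as_completed` yields futures in completion order, so slow threads never block the handling of finished ones.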