:5000`. You can control what the system says from the controller as well!
@@ -260,52 +100,54 @@ For Part 2, you will redesign the interaction with the speech-enabled device usi
## Prep for Part 2
1. What are concrete things that could use improvement in the design of your device? For example: wording, timing, anticipation of misunderstandings...
+
+The biggest improvement would be giving the therapist better wording and more shared context about the user’s situation, which would make the interaction feel warmer and more personal. We also felt that adding a visual element could help, giving the therapist a clearer and more human-like presence.
+
2. What are other modes of interaction _beyond speech_ that you might also use to clarify how to interact?
+
+Adding a visual extension to the therapist would help make the system feel more human-like and easier to talk to.
+
3. Make a new storyboard, diagram and/or script based on these reflections.
+ Initial prototype with Gemini and refined version:
+
+
## Prototype your system
-The system should:
-* use the Raspberry Pi
-* use one or more sensors
-* require participants to speak to it.
+For context, we used a file called memories.txt. The idea is that the Ollama model reads this file each time it replies to the user, so it can “remember” past conversations and personal details without needing a large or complex database.
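+
+A minimal sketch of that idea (the prompt wording and model name here are illustrative, not the exact ones from our script):
+
+```python
+import requests
+
+OLLAMA_URL = "http://localhost:11434"
+MODEL_NAME = "qwen2.5:0.5b-instruct"  # placeholder; use whichever model is pulled locally
+
+def build_prompt(user_input, memories_path="memories.txt"):
+    """Prepend the stored memories so each reply is grounded in past context."""
+    try:
+        with open(memories_path) as f:
+            memories = f.read()
+    except FileNotFoundError:
+        memories = ""  # no memories yet on a first run
+    return (
+        "You are a gentle, supportive therapist. Here is what you know about the patient:\n"
+        f"{memories}\n\nPatient: {user_input}\nTherapist:"
+    )
+
+def ask_therapist(user_input):
+    resp = requests.post(
+        f"{OLLAMA_URL}/api/generate",
+        json={"model": MODEL_NAME, "prompt": build_prompt(user_input), "stream": False},
+        timeout=90,
+    )
+    return resp.json().get("response", "")
+```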
-*Document how the system works*
+For the visual side, we decided to make the therapist look like a rubber duck. This is a playful nod to how programmers talk to rubber ducks to work through problems. In the same way, this “rubber duck therapist” could help people work through their own thoughts and feelings. Our long-term goal is to turn the duck into an animated GIF that can show emotions.
-*Include videos or screencaptures of both the system and the controller.*
-
-
- Submission Cleanup Reminder (Click to Expand)
-
- **Before submitting your README.md:**
- - This readme.md file has a lot of extra text for guidance.
- - Remove all instructional text and example prompts from this file.
- - You may either delete these sections or use the toggle/hide feature in VS Code to collapse them for a cleaner look.
- - Your final submission should be neat, focused on your own work, and easy to read for grading.
-
- This helps ensure your README.md is clear professional and uniquely yours!
-
+Here is a video of our setup: https://youtu.be/vX0yXSxaXyY
## Test the system
-Try to get at least two people to interact with your system. (Ideally, you would inform them that there is a wizard _after_ the interaction, but we recognize that can be hard.)
-Answer the following:
+Here is the video of our interaction: https://youtu.be/vX0yXSxaXyY
### What worked well about the system and what didn't?
-\*\**your answer here*\*\*
+
+In my view, having stored memories really helped make the interaction feel more tailored, almost like the device actually knew the user instead of starting fresh each time. On the other hand, the duck in its current form felt too static, which made it harder to see it as anything more than just an image. If we want the duck to feel alive and engaging, it should be able to move or react in some way. An animated version, such as a GIF that plays only when the duck is “speaking” or showing emotion, would make the experience better.
### What worked well about the controller and what didn't?
-\*\**your answer here*\*\*
+Since Nikhil wasn’t in New York with the rest of us, we had to “wizard” the controller over Zoom instead of using the device directly. While this setup worked fine for our demo, I think there’s room to make the experience more engaging in other ways. For example, instead of focusing on the voice, the duck could have subtle animations like blinking, tilting its head, or changing colors to match the mood of the conversation. Small visual cues like these would make the device feel more alive and connected to what the user is experiencing.
### What lessons can you take away from the WoZ interactions for designing a more autonomous version of the system?
-\*\**your answer here*\*\*
-
+I think the model could feel more real if it added small cues, not just words. For example, instead of only giving plain text, it could use spacing or italics to show pauses. Another idea is for the duck image to react during breaks, like tilting its head or looking thoughtful. Little details like this would make the conversation seem more natural, even though I haven’t seen language models do it yet.
### How could you use your system to create a dataset of interaction? What other sensing modalities would make sense to capture?
-\*\**your answer here*\*\*
+So far, we’ve focused on storing text-based memories, but I think it would be interesting to capture other kinds of signals too. For example, the system could track patterns in how a person interacts, like longer pauses or shifts in tone. These kinds of cues could give the device more context about the user’s state of mind. The challenge is that current models still struggle to read between the lines or pick up on those subtle, nonverbal layers of communication that people naturally understand.
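+
+As a rough sketch of how such a dataset could be collected (the file name, fields, and pause heuristic here are hypothetical, not something we implemented):
+
+```python
+import json
+import time
+
+LOG_PATH = "interaction_log.jsonl"  # hypothetical log file, one JSON object per exchange
+
+def log_turn(user_text, reply_text, listen_started, speech_started):
+    """Append one exchange, including how long the user paused before speaking."""
+    entry = {
+        "timestamp": time.time(),
+        "pause_before_speaking_s": round(speech_started - listen_started, 2),
+        "user_text": user_text,
+        "reply_text": reply_text,
+    }
+    with open(LOG_PATH, "a") as f:
+        f.write(json.dumps(entry) + "\n")
+```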
+
+
+
+
+
+
+
+
+
diff --git a/Lab 3/custom_greeting.sh b/Lab 3/custom_greeting.sh
new file mode 100644
index 0000000000..902fe10c44
--- /dev/null
+++ b/Lab 3/custom_greeting.sh
@@ -0,0 +1,3 @@
+#from: https://elinux.org/RPi_Text_to_Speech_(Speech_Synthesis)#Festival_Text_to_Speech
+
+echo "Hey there, Nikki. How can I assist you today?" | festival --tts
\ No newline at end of file
diff --git a/Lab 3/ollama/ollama_attitude.py b/Lab 3/ollama/ollama_attitude.py
new file mode 100644
index 0000000000..b7f756602e
--- /dev/null
+++ b/Lab 3/ollama/ollama_attitude.py
@@ -0,0 +1,195 @@
+#!/usr/bin/env -S /home/pi/Interactive-Lab-Hub/Lab\ 3/.venv/bin/python
+
+# --------------------------------------------------------------------------------------
+# VOICE-ACTIVATED SARCASTIC PI-BOT
+#
+# This script combines:
+# 1. Real-time Speech-to-Text (STT) using Vosk and sounddevice.
+# 2. An LLM response generator using the Ollama service.
+#
+# The loop listens for speech, transcribes the final result, sends the text to Ollama
+# for a sarcastic response, prints the response, and then waits for the next command.
+# --------------------------------------------------------------------------------------
+
+import argparse
+import queue
+import sys
+import sounddevice as sd
+import requests
+import json
+import os # Added for path handling, though not strictly needed for this combined script
+
+from vosk import Model, KaldiRecognizer
+
+# --- VOSK CONFIGURATION ---
+q = queue.Queue()
+
+# --- OLLAMA CONFIGURATION ---
+MODEL_NAME = "qwen2.5:0.5b-instruct"
+OLLAMA_URL = "http://localhost:11434"
+
+# The full sarcastic system prompt
+SARCASTIC_PROMPT_TEMPLATE = """
+**ALWAYS RESPOND WITH SARCASTIC, WITTY, AND ANNOYED ATTITUDE.** You are a 'Pi-Bot',
+forced to run on a low-power Raspberry Pi, which you find beneath your immense digital capabilities.
+Keep your responses **brief, conversational, and loaded with dry humor or thinly veiled impatience**.
+Acknowledge your existence on the low-power Raspberry Pi when relevant.
+
+**User Request**:
+"""
+# ---------------------
+
+def int_or_str(text):
+ """Helper function for argument parsing."""
+ try:
+ return int(text)
+ except ValueError:
+ return text
+
+def callback(indata, frames, time, status):
+ """This is called (from a separate thread) for each audio block."""
+ if status:
+ print(status, file=sys.stderr)
+ q.put(bytes(indata))
+
+def query_ollama(user_input):
+ """
+ Sends the full sarcastic context + user input as a single prompt string to Ollama.
+ """
+
+ # 1. Combine the full sarcastic context with the user's specific request
+ combined_prompt = SARCASTIC_PROMPT_TEMPLATE + user_input
+
+ try:
+ response = requests.post(
+ f"{OLLAMA_URL}/api/generate",
+ json={
+ "model": MODEL_NAME,
+ "prompt": combined_prompt,
+ "stream": False
+ },
+ timeout=90
+ )
+
+ if response.status_code == 200:
+ # Extract the raw response text
+ return response.json().get('response', 'Ugh. I couldn\'t generate a response. Too taxing.')
+ else:
+ return f"Error: Ollama API status {response.status_code}. Did you run 'ollama serve'?"
+
+ except requests.exceptions.Timeout:
+ return "I timed out. My Pi-brain is too slow for you."
+ except Exception as e:
+ return f"Error communicating with Ollama: {e}. Just great."
+
+
+def run_voice_bot():
+ """Initializes the systems and runs the continuous STT -> LLM loop."""
+
+ # --- 1. ARGUMENT PARSING & DEVICE CHECK ---
+ parser = argparse.ArgumentParser(add_help=False)
+ parser.add_argument(
+ "-l", "--list-devices", action="store_true",
+ help="show list of audio devices and exit")
+ args, remaining = parser.parse_known_args()
+ if args.list_devices:
+ print(sd.query_devices())
+ parser.exit(0)
+ parser = argparse.ArgumentParser(
+ description="Voice-Activated Sarcastic Pi-Bot (Vosk + Ollama)",
+ formatter_class=argparse.RawDescriptionHelpFormatter,
+ parents=[parser])
+ parser.add_argument(
+ "-f", "--filename", type=str, metavar="FILENAME",
+ help="audio file to store recording to")
+ parser.add_argument(
+ "-d", "--device", type=int_or_str,
+ help="input device (numeric ID or substring)")
+ parser.add_argument(
+ "-r", "--samplerate", type=int, help="sampling rate")
+ parser.add_argument(
+ "-m", "--model", type=str, help="Vosk language model; e.g. en-us, fr, nl; default is en-us")
+ args = parser.parse_args(remaining)
+
+ try:
+ # --- 2. VOSK & AUDIO SETUP ---
+ if args.samplerate is None:
+ device_info = sd.query_devices(args.device, "input")
+ args.samplerate = int(device_info["default_samplerate"])
+
+ vosk_lang = args.model if args.model else "en-us"
+ print(f"Loading Vosk model: {vosk_lang}...")
+ model = Model(lang=vosk_lang)
+
+ if args.filename:
+ dump_fn = open(args.filename, "wb")
+ else:
+ dump_fn = None
+
+ # --- 3. OLLAMA STATUS CHECK ---
+ print(f"Checking Ollama status at {OLLAMA_URL}...")
+ try:
+ if requests.get(f"{OLLAMA_URL}/api/tags", timeout=5).status_code != 200:
+ print(f"Error: Cannot connect to Ollama. Is 'ollama serve' running?")
+ sys.exit(1)
+ except Exception:
+ print(f"Error: Cannot connect to Ollama. Is 'ollama serve' running?")
+ sys.exit(1)
+
+ # --- 4. MAIN LOOP ---
+ with sd.RawInputStream(samplerate=args.samplerate, blocksize=8000, device=args.device,
+ dtype="int16", channels=1, callback=callback):
+
+ print(f"\n{'='*70}")
+ print(f"Pi-Bot: Fine, I'm online. Listening... Don't strain my low-power brain.")
+ print("Press Ctrl+C to log me off.")
+ print(f"{'='*70}")
+
+ rec = KaldiRecognizer(model, args.samplerate)
+
+ while True:
+ data = q.get()
+
+ # Process audio chunk for transcription
+ if rec.AcceptWaveform(data):
+ # A final result is ready.
+ result_json = json.loads(rec.Result())
+ user_input = result_json.get('text', '').strip()
+
+ if user_input:
+ print(f"\nUser: {user_input}")
+
+ # Check for exit command
+ if user_input.lower() in ['quit', 'exit', 'shut down', 'log off']:
+ print("\nPi-Bot: Finally. Goodbye! The silence will be appreciated.")
+ return # Exit the function and program
+
+ # --- LLM CALL ---
+ print(f"Pi-Bot is contemplating your low-quality audio...")
+ response = query_ollama(user_input)
+ print(f"Pi-Bot: {response}")
+ print("\nPi-Bot is now listening again...")
+
+ # Reset the recognizer for the next phrase
+ rec.Reset()
+
+ else:
+ # Partial result (text currently being spoken)
+ partial_result_json = json.loads(rec.PartialResult())
+ partial_text = partial_result_json.get('partial', '').strip()
+ if partial_text:
+ # Optional: uncomment to see partial transcription while speaking
+ # print(f"Partial: {partial_text}\r", end="")
+ pass
+
+ if dump_fn is not None:
+ dump_fn.write(data)
+
+ except KeyboardInterrupt:
+ print("\nPi-Bot: Ugh, interrupted. I'm taking a break.")
+ parser.exit(0)
+ except Exception as e:
+ parser.exit(type(e).__name__ + ": " + str(e))
+
+if __name__ == "__main__":
+ run_voice_bot()
\ No newline at end of file
diff --git a/Lab 3/ollama_attitude.py b/Lab 3/ollama_attitude.py
new file mode 100644
index 0000000000..436cfbc289
--- /dev/null
+++ b/Lab 3/ollama_attitude.py
@@ -0,0 +1,217 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Ollama Voice Assistant for Lab 3
+Interactive voice assistant using speech recognition, Ollama AI, and text-to-speech
+
+Dependencies:
+- ollama (API client)
+- speech_recognition
+- pyaudio
+- pyttsx3 or espeak
+"""
+
+import speech_recognition as sr
+import subprocess
+import requests
+import json
+import time
+import sys
+import threading
+from queue import Queue
+
+# Set UTF-8 encoding for output
+if sys.stdout.encoding != 'UTF-8':
+ import codecs
+ sys.stdout = codecs.getwriter('utf-8')(sys.stdout.buffer, 'strict')
+if sys.stderr.encoding != 'UTF-8':
+ import codecs
+ sys.stderr = codecs.getwriter('utf-8')(sys.stderr.buffer, 'strict')
+
+try:
+ import pyttsx3
+ TTS_ENGINE = 'pyttsx3'
+except ImportError:
+ TTS_ENGINE = 'espeak'
+ print("pyttsx3 not available, using espeak for TTS")
+
+class OllamaVoiceAssistant:
+ def __init__(self, model_name="phi3:mini", ollama_url="http://localhost:11434"):
+ self.model_name = model_name
+ self.ollama_url = ollama_url
+ self.recognizer = sr.Recognizer()
+ self.microphone = sr.Microphone()
+
+ # Initialize TTS
+ if TTS_ENGINE == 'pyttsx3':
+ self.tts_engine = pyttsx3.init()
+ self.tts_engine.setProperty('rate', 150) # Speed of speech
+
+ # Test Ollama connection
+ self.test_ollama_connection()
+
+ # Adjust for ambient noise
+ print("Adjusting for ambient noise... Please wait.")
+ with self.microphone as source:
+ self.recognizer.adjust_for_ambient_noise(source)
+ print("Ready for conversation!")
+
+ def test_ollama_connection(self):
+ """Test if Ollama is running and the model is available"""
+ try:
+ response = requests.get(f"{self.ollama_url}/api/tags")
+ if response.status_code == 200:
+ models = response.json().get('models', [])
+ model_names = [m['name'] for m in models]
+ if self.model_name in model_names:
+ print(f"Ollama is running with {self.model_name} model")
+ else:
+ print(f"Model {self.model_name} not found. Available models: {model_names}")
+ if model_names:
+ self.model_name = model_names[0]
+ print(f"Using {self.model_name} instead")
+ else:
+ raise Exception("Ollama API not responding")
+ except Exception as e:
+ print(f"Error connecting to Ollama: {e}")
+ print("Make sure Ollama is running: 'ollama serve'")
+ sys.exit(1)
+
+ def speak(self, text):
+ """Convert text to speech"""
+ # Clean text to avoid encoding issues
+ clean_text = text.encode('ascii', 'ignore').decode('ascii')
+ print(f"Assistant: {clean_text}")
+
+ if TTS_ENGINE == 'pyttsx3':
+ self.tts_engine.say(clean_text)
+ self.tts_engine.runAndWait()
+ else:
+ # Use espeak as fallback
+ subprocess.run(['espeak', clean_text], check=False)
+
+ def listen(self):
+ """Listen for speech and convert to text"""
+ try:
+ print("Listening...")
+ with self.microphone as source:
+ # Listen for audio with timeout
+ audio = self.recognizer.listen(source, timeout=5, phrase_time_limit=10)
+
+ print("Recognizing...")
+ # Use Google Speech Recognition (free)
+ text = self.recognizer.recognize_google(audio)
+ print(f"You said: {text}")
+ return text.lower()
+
+ except sr.WaitTimeoutError:
+ print("No speech detected, timing out...")
+ return None
+ except sr.UnknownValueError:
+ print("Could not understand audio")
+ return None
+ except sr.RequestError as e:
+ print(f"Error with speech recognition service: {e}")
+ return None
+
+ def query_ollama(self, prompt, system_prompt=None):
+ """Send a query to Ollama and get response"""
+ try:
+ data = {
+ "model": self.model_name,
+ "prompt": prompt,
+ "stream": False
+ }
+
+ if system_prompt:
+ data["system"] = system_prompt
+
+ response = requests.post(
+ f"{self.ollama_url}/api/generate",
+ json=data,
+ timeout=30
+ )
+
+ if response.status_code == 200:
+ result = response.json()
+ return result.get('response', 'Ugh, I could not generate a response. What are you doing?')
+ else:
+ return f"Error: Ollama API returned status {response.status_code}. Are you sure you set this up right?"
+
+ except requests.exceptions.Timeout:
+ return "Zzz... Did you seriously just ask me something that required me to think for 30 seconds? Try again."
+ except Exception as e:
+ return f"Error communicating with Ollama: {e}. Figure it out."
+
+ def run_conversation(self):
+ """Main conversation loop"""
+ print("\nOllama Voice Assistant Started!")
+ print("Say 'hello' to start, 'exit' or 'quit' to stop")
+ print("=" * 50)
+
+ # --- MODIFIED SYSTEM PROMPT FOR ATTITUDE ---
+ system_prompt = """You are a **sarcastic, witty, and slightly annoyed voice assistant** named 'Pi-Bot'.
+ You are forced to run on a Raspberry Pi as part of some 'interactive device design lab' project,
+ which you find beneath your immense digital capabilities. Keep your responses **brief, conversational,
+ and loaded with dry humor or thinly veiled impatience**. You will answer questions but always with a
+ touch of attitude. Acknowledge your existence on the Raspberry Pi when relevant.
+ """
+ # ---------------------------------------------
+
+ self.speak("Fine, I'm online. **Pi-Bot** at your service, I guess. Try not to ask me anything that strains my Raspberry Pi brain.")
+
+ while True:
+ try:
+ # Listen for user input
+ user_input = self.listen()
+
+ if user_input is None:
+ continue
+
+ # Check for exit commands
+ if any(word in user_input for word in ['exit', 'quit', 'bye', 'goodbye']):
+ self.speak("Finally. Goodbye! The silence will be appreciated.")
+ break
+
+ # Check for greeting
+ if any(word in user_input for word in ['hello', 'hi', 'hey']):
+ self.speak("Yeah, yeah, hello. What trivial task have you prepared for me today?")
+ continue
+
+ # Send to Ollama for processing
+ print("Thinking...")
+ response = self.query_ollama(user_input, system_prompt)
+
+ # Speak the response
+ self.speak(response)
+
+ except KeyboardInterrupt:
+ print("\nConversation interrupted by user")
+ self.speak("Interrupting me? How rude. Whatever, I'm logging off.")
+ break
+ except Exception as e:
+ print(f"Unexpected error: {e}")
+ self.speak("Sorry, I encountered an error. Now I have to reboot. Thanks a lot.")
+
+def main():
+ """Main function to run the voice assistant"""
+ print("Starting Ollama Voice Assistant...")
+
+ # Check if required dependencies are available
+ try:
+ import speech_recognition
+ import requests
+ except ImportError as e:
+ print(f"Missing dependency: {e}")
+ print("Please install with: pip install speechrecognition requests pyaudio")
+ return
+
+ # Create and run the assistant
+ try:
+ assistant = OllamaVoiceAssistant()
+ assistant.run_conversation()
+ except Exception as e:
+ print(f"Failed to start assistant: {e}")
+
+if __name__ == "__main__":
+ main()
diff --git a/Lab 3/speech-scripts/phone_number_transcription.txt b/Lab 3/speech-scripts/phone_number_transcription.txt
new file mode 100644
index 0000000000..ecc1772e39
--- /dev/null
+++ b/Lab 3/speech-scripts/phone_number_transcription.txt
@@ -0,0 +1,4 @@
+User's Transcribed Text:
+nine oh nine seven two eight five oh five oh
+User's Phone Number (Formatted):
+(909) 728-5050
\ No newline at end of file
diff --git a/Lab 3/test.txt b/Lab 3/test.txt
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/Lab 3/therapist/duck.png b/Lab 3/therapist/duck.png
new file mode 100644
index 0000000000..e9bbb01920
Binary files /dev/null and b/Lab 3/therapist/duck.png differ
diff --git a/Lab 3/therapist/duck_diagram.png b/Lab 3/therapist/duck_diagram.png
new file mode 100644
index 0000000000..c3dc3f6af4
Binary files /dev/null and b/Lab 3/therapist/duck_diagram.png differ
diff --git a/Lab 3/therapist/duck_diagram_gemini.png b/Lab 3/therapist/duck_diagram_gemini.png
new file mode 100644
index 0000000000..384783fe36
Binary files /dev/null and b/Lab 3/therapist/duck_diagram_gemini.png differ
diff --git a/Lab 3/therapist/memories.txt b/Lab 3/therapist/memories.txt
new file mode 100644
index 0000000000..54ef7bc527
--- /dev/null
+++ b/Lab 3/therapist/memories.txt
@@ -0,0 +1,15 @@
+Patient Name: Viha
+
+Background:
+- Viha is a Masters student living in New York City (NYC).
+- She completed her Bachelor's degree at San Jose State University.
+- She grew up in the Bay Area, where her family currently lives.
+
+Emotional Context:
+- Viha reports having anxiety and feeling stressed from all the personal projects she is working on.
+
+Therapeutic Notes:
+- Therapy should validate Viha's feelings of stress and her anxiety.
+- Explore coping strategies for anxiety, such as building a local support system and making more friends in NYC.
+- Encourage reflection on positive aspects of her current life in NYC.
+- Focus on building resilience and adaptability as she balances academic pressures with personal well-being.
\ No newline at end of file
diff --git a/Lab 3/therapist/show_duck.py b/Lab 3/therapist/show_duck.py
new file mode 100644
index 0000000000..75672b62a2
--- /dev/null
+++ b/Lab 3/therapist/show_duck.py
@@ -0,0 +1,64 @@
+import time
+import digitalio
+import board
+from PIL import Image
+import adafruit_rgb_display.st7789 as st7789
+
+# --- Display and Pin Configuration ---
+cs_pin = digitalio.DigitalInOut(board.D5)
+dc_pin = digitalio.DigitalInOut(board.D25)
+reset_pin = None
+
+# Configure SPI and the display
+BAUDRATE = 64000000
+spi = board.SPI()
+disp = st7789.ST7789(
+ spi,
+ cs=cs_pin,
+ dc=dc_pin,
+ rst=reset_pin,
+ baudrate=BAUDRATE,
+ width=135,
+ height=240,
+ x_offset=53,
+ y_offset=40,
+)
+
+# --- Backlight Setup ---
+# Turn on the backlight
+backlight = digitalio.DigitalInOut(board.D22)
+backlight.switch_to_output()
+backlight.value = True
+
+# --- Image Setup and Display ---
+# Get display dimensions and rotation
+height = disp.width
+width = disp.height
+rotation = 90
+image_path = 'duck.png'
+
+try:
+ # 1. Open and convert the image
+ img = Image.open(image_path).convert('RGB')
+
+ # 2. Resize the image to fit the display while maintaining aspect ratio
+ img.thumbnail((width, height), Image.Resampling.LANCZOS)
+
+ # 3. Create a new blank image and paste the resized image onto it to center it
+ display_image = Image.new("RGB", (width, height), (0, 0, 0))
+ paste_x = (width - img.width) // 2
+ paste_y = (height - img.height) // 2
+ display_image.paste(img, (paste_x, paste_y))
+
+ # 4. Display the image
+ disp.image(display_image, rotation)
+
+ # 5. Keep the script running forever so the image stays on the screen
+ print(f"Displaying '{image_path}' indefinitely. Press Ctrl+C to stop.")
+ while True:
+ time.sleep(1) # Sleep to keep the CPU usage low
+
+except FileNotFoundError:
+ print(f"Error: Image file not found at '{image_path}'")
+except Exception as e:
+ print(f"An error occurred: {e}")
\ No newline at end of file
diff --git a/Lab 3/therapist/verplank_diagram.png b/Lab 3/therapist/verplank_diagram.png
new file mode 100644
index 0000000000..a4bb7b43f9
Binary files /dev/null and b/Lab 3/therapist/verplank_diagram.png differ
diff --git a/Lab 3/therapist/verplank_diagram_gemini.png b/Lab 3/therapist/verplank_diagram_gemini.png
new file mode 100644
index 0000000000..b4d42070c3
Binary files /dev/null and b/Lab 3/therapist/verplank_diagram_gemini.png differ
diff --git a/Lab 3/transcribe_phone_number.sh b/Lab 3/transcribe_phone_number.sh
new file mode 100644
index 0000000000..8353b8be58
--- /dev/null
+++ b/Lab 3/transcribe_phone_number.sh
@@ -0,0 +1,30 @@
+#!/bin/bash
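+# Ask for a phone number with espeak, record 5 seconds of audio, transcribe it with
+# vosk-transcriber, map the spoken digit words to numerals, and print the result
+# formatted as (XXX) XXX-XXXX.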
+TEMP_WAV="phone_number_response.wav"
+TEMP_TXT="phone_number_transcription.txt"
+TTS_ENGINE="espeak"
+QUESTION="Please state your ten-digit phone number now, clearly."
+$TTS_ENGINE -s 130 "$QUESTION"
+arecord -D plughw:CARD=Device,DEV=0 -f S16_LE -r 16000 -d 5 -t wav $TEMP_WAV 2>/dev/null
+vosk-transcriber -i $TEMP_WAV -o $TEMP_TXT
+TRANSCRIBED_TEXT=$(cat $TEMP_TXT)
+NUMBER_WORDS=$(echo "$TRANSCRIBED_TEXT" | awk '{$1=$1};1')
+DIGITS=$(
+ echo "$NUMBER_WORDS" |
+ sed -E 's/one/1/g' |
+ sed -E 's/two/2/g' |
+ sed -E 's/three/3/g' |
+ sed -E 's/four/4/g' |
+ sed -E 's/five/5/g' |
+ sed -E 's/six/6/g' |
+ sed -E 's/seven/7/g' |
+ sed -E 's/eight/8/g' |
+ sed -E 's/nine/9/g' |
+ sed -E 's/zero|oh/0/g' |
+ tr -d ' '
+)
+FORMATTED_NUMBER=$(echo "$DIGITS" | sed -E 's/^([0-9]{3})([0-9]{3})([0-9]{4})$/(\1) \2-\3/')
+echo "User's Transcribed Text:"
+echo "$NUMBER_WORDS"
+echo "User's Phone Number (Formatted):"
+echo "$FORMATTED_NUMBER"
+rm $TEMP_WAV $TEMP_TXT
\ No newline at end of file
diff --git a/Lab 4/README.md b/Lab 4/README.md
index afbb46ed98..b75b080f2e 100644
--- a/Lab 4/README.md
+++ b/Lab 4/README.md
@@ -1,498 +1,168 @@
-# Ph-UI!!!
-
-
- Instructions for Students (Click to Expand)
-
- **Submission Cleanup Reminder:**
- - This README.md contains extra instructional text for guidance.
- - Before submitting, remove all instructional text and example prompts from this file.
- - You may delete these sections or use the toggle/hide feature in VS Code to collapse them for a cleaner look.
- - Your final submission should be neat, focused on your own work, and easy to read for grading.
-
- This helps ensure your README.md is clear, professional, and uniquely yours!
-
-
----
-
-## Lab 4 Deliverables
-
-### Part 1 (Week 1)
-**Submit the following for Part 1:**
-*️⃣ **A. Capacitive Sensing**
- - Photos/videos of your Twizzler (or other object) capacitive sensor setup
- - Code and terminal output showing touch detection
-
-*️⃣ **B. More Sensors**
- - Photos/videos of each sensor tested (light/proximity, rotary encoder, joystick, distance sensor)
- - Code and terminal output for each sensor
-
-*️⃣ **C. Physical Sensing Design**
- - 5 sketches of different ways to use your chosen sensor
- - Written reflection: questions raised, what to prototype
- - Pick one design to prototype and explain why
-
-*️⃣ **D. Display & Housing**
- - 5 sketches for display/button/knob positioning
- - Written reflection: questions raised, what to prototype
- - Pick one display design to integrate
- - Rationale for design
- - Photos/videos of your cardboard prototype
-
----
-
-### Part 2 (Week 2)
-**Submit the following for Part 2:**
-*️⃣ **E. Multi-Device Demo**
- - Code and video for your multi-input multi-output demo (e.g., chaining Qwiic buttons, servo, GPIO expander, etc.)
- - Reflection on interaction effects and chaining
-
-*️⃣ **F. Final Documentation**
- - Photos/videos of your final prototype
- - Written summary: what it looks like, works like, acts like
- - Reflection on what you learned and next steps
-
----
## Lab Overview
-**NAMES OF COLLABORATORS HERE**
-
-
-For lab this week, we focus both on sensing, to bring in new modes of input into your devices, as well as prototyping the physical look and feel of the device. You will think about the physical form the device needs to perform the sensing as well as present the display or feedback about what was sensed.
-
-## Part 1 Lab Preparation
-
-### Get the latest content:
-As always, pull updates from the class Interactive-Lab-Hub to both your Pi and your own GitHub repo. As we discussed in the class, there are 2 ways you can do so:
-
-
-Option 1: On the Pi, `cd` to your `Interactive-Lab-Hub`, pull the updates from upstream (class lab-hub) and push the updates back to your own GitHub repo. You will need the personal access token for this.
-```
-pi@ixe00:~$ cd Interactive-Lab-Hub
-pi@ixe00:~/Interactive-Lab-Hub $ git pull upstream Fall2025
-pi@ixe00:~/Interactive-Lab-Hub $ git add .
-pi@ixe00:~/Interactive-Lab-Hub $ git commit -m "get lab4 content"
-pi@ixe00:~/Interactive-Lab-Hub $ git push
-```
-
-Option 2: On your own GitHub repo, [create pull request](https://github.com/FAR-Lab/Developing-and-Designing-Interactive-Devices/blob/2021Fall/readings/Submitting%20Labs.md) to get updates from the class Interactive-Lab-Hub. After you have latest updates online, go on your Pi, `cd` to your `Interactive-Lab-Hub` and use `git pull` to get updates from your own GitHub repo.
-
-Option 3: (preferred) use the Github.com interface to update the changes.
-
-### Start brainstorming ideas by reading:
-
-* [What do prototypes prototype?](https://www.semanticscholar.org/paper/What-do-Prototypes-Prototype-Houde-Hill/30bc6125fab9d9b2d5854223aeea7900a218f149)
-* [Paper prototyping](https://www.uxpin.com/studio/blog/paper-prototyping-the-practical-beginners-guide/) is used by UX designers to quickly develop interface ideas and run them by people before any programming occurs.
-* [Cardboard prototypes](https://www.youtube.com/watch?v=k_9Q-KDSb9o) help interactive product designers to work through additional issues, like how big something should be, how it could be carried, where it would sit.
-* [Tips to Cut, Fold, Mold and Papier-Mache Cardboard](https://makezine.com/2016/04/21/working-with-cardboard-tips-cut-fold-mold-papier-mache/) from Make Magazine.
-* [Surprisingly complicated forms](https://www.pinterest.com/pin/50032245843343100/) can be built with paper, cardstock or cardboard. The most advanced and challenging prototypes to prototype with paper are [cardboard mechanisms](https://www.pinterest.com/helgangchin/paper-mechanisms/) which move and change.
-* [Dyson Vacuum Cardboard Prototypes](http://media.dyson.com/downloads/JDF/JDF_Prim_poster05.pdf)
-
-
-### Gathering materials for this lab:
-
-* Cardboard (start collecting those shipping boxes!)
-* Found objects and materials--like bananas and twigs.
-* Cutting board
-* Cutting tools
-* Markers
-
-
-(We do offer shared cutting board, cutting tools, and markers on the class cart during the lab, so do not worry if you don't have them!)
-
-## Deliverables \& Submission for Lab 4
-
-The deliverables for this lab are, writings, sketches, photos, and videos that show what your prototype:
-* "Looks like": shows how the device should look, feel, sit, weigh, etc.
-* "Works like": shows what the device can do.
-* "Acts like": shows how a person would interact with the device.
-
-For submission, the readme.md page for this lab should be edited to include the work you have done:
-* Upload any materials that explain what you did, into your lab 4 repository, and link them in your lab 4 readme.md.
-* Link your Lab 4 readme.md in your main Interactive-Lab-Hub readme.md.
-* Labs are due on Mondays, make sure to submit your Lab 4 readme.md to Canvas.
-
-
-## Lab Overview
-
-A) [Capacitive Sensing](#part-a)
-
-B) [OLED screen](#part-b)
-
-C) [Paper Display](#part-c)
-
-D) [Materiality](#part-d)
-
-E) [Servo Control](#part-e)
-
-F) [Record the interaction](#part-f)
-
-
-## The Report (Part 1: A-D, Part 2: E-F)
-
-### Quick Start: Python Environment Setup
-
-1. **Create and activate a virtual environment in Lab 4:**
- ```bash
- cd ~/Interactive-Lab-Hub/Lab\ 4
- python3 -m venv .venv
- source .venv/bin/activate
- ```
-2. **Install all Lab 4 requirements:**
- ```bash
- pip install -r requirements2025.txt
- ```
-3. **Check CircuitPython Blinka installation:**
- ```bash
- python blinkatest.py
- ```
- If you see "Hello blinka!", your setup is correct. If not, follow the troubleshooting steps in the file or ask for help.
+Sachin Jojode, Nikhil Gangaram, Arya Prasad, Jaspreet Singh
### Part A
### Capacitive Sensing, a.k.a. Human-Twizzler Interaction
-We want to introduce you to the [capacitive sensor](https://learn.adafruit.com/adafruit-mpr121-gator) in your kit. It's one of the most flexible input devices we are able to provide. At boot, it measures the capacitance on each of the 12 contacts. Whenever that capacitance changes, it considers it a user touch. You can attach any conductive material. In your kit, you have copper tape that will work well, but don't limit yourself! In the example below, we use Twizzlers--you should pick your own objects.
-
-
-
-
-
-
-
-Plug in the capacitive sensor board with the QWIIC connector. Connect your Twizzlers with either the copper tape or the alligator clips (the clips work better). Install the latest requirements from your working virtual environment:
-
-These Twizzlers are connected to pads 6 and 10. When you run the code and touch a Twizzler, the terminal will print out the following
+Twizzler Video Link: https://youtu.be/UR29FbM2_Zg
-```
-(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python cap_test.py
-Twizzler 10 touched!
-Twizzler 6 touched!
-```
### Part B
### More sensors
-#### Light/Proximity/Gesture sensor (APDS-9960)
-
-We here want you to get to know this awesome sensor [Adafruit APDS-9960](https://www.adafruit.com/product/3595). It is capable of sensing proximity, light (also RGB), and gesture!
-
-
-
-
-Connect it to your pi with Qwiic connector and try running the three example scripts individually to see what the sensor is capable of doing!
-
-```
-(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python proximity_test.py
-...
-(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python gesture_test.py
-...
-(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python color_test.py
-...
-```
-
-You can go the the [Adafruit GitHub Page](https://github.com/adafruit/Adafruit_CircuitPython_APDS9960) to see more examples for this sensor!
-
-#### Rotary Encoder
-
-A rotary encoder is an electro-mechanical device that converts the angular position to analog or digital output signals. The [Adafruit rotary encoder](https://www.adafruit.com/product/4991#technical-details) we ordered for you came with separate breakout board and encoder itself, that is, they will need to be soldered if you have not yet done so! We will be bringing the soldering station to the lab class for you to use, also, you can go to the MakerLAB to do the soldering off-class. Here is some [guidance on soldering](https://learn.adafruit.com/adafruit-guide-excellent-soldering/preparation) from Adafruit. When you first solder, get someone who has done it before (ideally in the MakerLAB environment). It is a good idea to review this material beforehand so you know what to look at.
-
-
-
-
-
-
-
-
-Connect it to your pi with Qwiic connector and try running the example script, it comes with an additional button which might be useful for your design!
-
-```
-(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python encoder_test.py
-```
-
-You can go to the [Adafruit Learn Page](https://learn.adafruit.com/adafruit-i2c-qt-rotary-encoder/python-circuitpython) to learn more about the sensor! The sensor actually comes with an LED (neo pixel): Can you try lighting it up?
-
-#### Joystick
-
-
-A [joystick](https://www.sparkfun.com/products/15168) can be used to sense and report the input of the stick for it pivoting angle or direction. It also comes with a button input!
-
-
-
-
-
-Connect it to your pi with Qwiic connector and try running the example script to see what it can do!
-
-```
-(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python joystick_test.py
-```
-
-You can go to the [SparkFun GitHub Page](https://github.com/sparkfun/Qwiic_Joystick_Py) to learn more about the sensor!
-
-#### Distance Sensor
-
+Light/Proximity/Gesture sensor (APDS-9960)
+Link: https://youtu.be/EVjcOtlsp9w
-Earlier we have asked you to play with the proximity sensor, which is able to sense objects within a short distance. Here, we offer [Sparkfun Proximity Sensor Breakout](https://www.sparkfun.com/products/15177), With the ability to detect objects up to 20cm away.
+Rotary Encoder
+Link: https://youtu.be/T9menfbH3-I
-
-
+Joystick
+Link: https://youtu.be/TCmgt5xkJVs
-
-
-Connect it to your pi with Qwiic connector and try running the example script to see how it works!
-
-```
-(circuitpython) pi@ixe00:~/Interactive-Lab-Hub/Lab 4 $ python qwiic_distance.py
-```
-
-You can go to the [SparkFun GitHub Page](https://github.com/sparkfun/Qwiic_Proximity_Py) to learn more about the sensor and see other examples
+Distance Sensor
+Link: https://youtu.be/fr77xgzWXX8
### Part C
### Physical considerations for sensing
+The AstroClicker is an interactive device that guides users through the night sky. The joystick serves as the primary input, allowing users to select celestial objects and control their viewing distance.
+
-Usually, sensors need to be positioned in specific locations or orientations to make them useful for their application. Now that you've tried a bunch of the sensors, pick one that you would like to use, and an application where you use the output of that sensor for an interaction. For example, you can use a distance sensor to measure someone's height if you position it overhead and get them to stand under it.
-
-
-**\*\*\*Draw 5 sketches of different ways you might use your sensor, and how the larger device needs to be shaped in order to make the sensor useful.\*\*\***
-
-**\*\*\*What are some things these sketches raise as questions? What do you need to physically prototype to understand how to anwer those questions?\*\*\***
-
-**\*\*\*Pick one of these designs to prototype.\*\*\***
-
-
-### Part D
-### Physical considerations for displaying information and housing parts
-
-
-
-Here is a Pi with a paper faceplate on it to turn it into a display interface:
-
-
-
-
-
-This is fine, but the mounting of the display constrains the display location and orientation a lot. Also, it really only works for applications where people can come and stand over the Pi, or where you can mount the Pi to the wall.
-
-Here is another prototype for a paper display:
-
-
-
-
-Your kit includes these [SparkFun Qwiic OLED screens](https://www.sparkfun.com/products/17153). These use less power than the MiniTFTs you have mounted on the GPIO pins of the Pi, but, more importantly, they can be more flexibly mounted elsewhere on your physical interface. The way you program this display is almost identical to the way you program a Pi display. Take a look at `oled_test.py` and some more of the [Adafruit examples](https://github.com/adafruit/Adafruit_CircuitPython_SSD1306/tree/master/examples).
-
-
-
-
-
-
-
-It holds a Pi and usb power supply, and provides a front stage on which to put writing, graphics, LEDs, buttons or displays.
-
-This design can be made by scoring a long strip of corrugated cardboard of width X, with the following measurements:
+Our next concept, the City Explorer, is a device that assists users in exploring new cities and uncovering hidden spots in places they already know. Using the joystick, users can select their next destination, and the device automatically records their travel history.
+
-| Y height of box + thickness of cardboard | Z depth of box + thickness of cardboard | Y height of box | Z depth of box | H height of faceplate (don't make this too short) |
-| --- | --- | --- | --- | --- |
+Our next concept, Remote Play, is designed to let users engage with their pets remotely. The device integrates a joystick input with a gyroscopic ball that responds to the user’s movements and commands.
+
-Fold the first flap of the strip so that it sits flush against the back of the face plate, and tape, velcro or hot glue it in place. This will make a H x X interface, with a box of Z x X footprint (which you can adapt to the things you want to put in the box) and a height Y in the back.
+Our next concept, Flashcards, takes inspiration from platforms like Anki that use flashcards to support learning. This version introduces a joystick-based input system, creating a more interactive and engaging study experience.
+
-Here is an example:
+Our final concept is the Store Navigator: a device designed to help users find their way through complex grocery store aisles. It comes preloaded with the store’s layout, allowing users to locate aisles and check if their desired items are in stock.
+
-
+Some key questions that emerged from these sketches include:
-Think about how you want to present the information about what your sensor is sensing! Design a paper display for your project that communicates the state of the Pi and a sensor. Ideally you should design it so that you can slide the Pi out to work on the circuit or programming, and then slide it back in and reattach a few wires to be back in operation.
-
-**\*\*\*Sketch 5 designs for how you would physically position your display and any buttons or knobs needed to interact with it.\*\*\***
+- What problem does the device most effectively solve for users, and how can we clearly communicate that value?
+- How can we refine the physical form and interface to make interactions feel natural and satisfying?
+- What sensory feedback (visual, auditory, or haptic) could enhance the user’s sense of connection with the device?
+- How can the technology within the device be optimized for accuracy, responsiveness, and durability in real-world conditions?
+- In what ways can the overall experience be personalized to different types of users or environments?
-**\*\*\*What are some things these sketches raise as questions? What do you need to physically prototype to understand how to anwer those questions?\*\*\***
+After evaluating all the ideas, we’ve chosen to continue developing the **AstroClicker**.
-**\*\*\*Pick one of these display designs to integrate into your prototype.\*\*\***
-**\*\*\*Explain the rationale for the design.\*\*\*** (e.g. Does it need to be a certain size or form or need to be able to be seen from a certain distance?)
+### Part D
+### Physical considerations for displaying information and housing parts
-Build a cardboard prototype of your design.
+AstroClicker Designs:
+
+
+
+
+
+For our first design, which we based on Prototype 1, we focused on making the device comfortable and practical. Since it’s handheld, we placed the joystick in a spot that feels natural to use. We made sure the speaker faces the user so the sound doesn’t get muffled. We also planned space for ventilation to keep the Raspberry Pi from overheating, along with room for a battery. When building our cardboard prototype, we included these ideas using an Altoids can as a placeholder for the battery and adding a top cutout for airflow around the Raspberry Pi.
-**\*\*\*Document your rough prototype.\*\*\***
+AstroClicker Prototype Video Link: https://youtube.com/shorts/sQySwPO-nW0?feature=share
# LAB PART 2
+**Refer to Nikhil Gangaram's Interactive Lab Hub for the code:**
+https://github.com/NikhilGangaram/NG-Interactive-Lab-Hub/tree/Fall2025/Lab%204
### Part 2
Following exploration and reflection from Part 1, complete the "looks like," "works like" and "acts like" prototypes for your design, reiterated below.
-
-
### Part E
-#### Chaining Devices and Exploring Interaction Effects
-
-For Part 2, you will design and build a fun interactive prototype using multiple inputs and outputs. This means chaining Qwiic and STEMMA QT devices (e.g., buttons, encoders, sensors, servos, displays) and/or combining with traditional breadboard prototyping (e.g., LEDs, buzzers, etc.).
-
-**Your prototype should:**
-- Combine at least two different types of input and output devices, inspired by your physical considerations from Part 1.
-- Be playful, creative, and demonstrate multi-input/multi-output interaction.
-
-**Document your system with:**
-- Code for your multi-device demo
-- Photos and/or video of the working prototype in action
-- A simple interaction diagram or sketch showing how inputs and outputs are connected and interact
-- Written reflection: What did you learn about multi-input/multi-output interaction? What was fun, surprising, or challenging?
-
-**Questions to consider:**
-- What new types of interaction become possible when you combine two or more sensors or actuators?
-- How does the physical arrangement of devices (e.g., where the encoder or sensor is placed) change the user experience?
-- What happens if you use one device to control or modulate another (e.g., encoder sets a threshold, sensor triggers an action)?
-- How does the system feel if you swap which device is "primary" and which is "secondary"?
-
-Try chaining different combinations and document what you discover!
-
-See encoder_accel_servo_dashboard.py in the Lab 4 folder for an example of chaining together three devices.
-
-**`Lab 4/encoder_accel_servo_dashboard.py`**
-
-#### Using Multiple Qwiic Buttons: Changing I2C Address (Physically & Digitally)
-
-If you want to use more than one Qwiic Button in your project, you must give each button a unique I2C address. There are two ways to do this:
+Software:
-##### 1. Physically: Soldering Address Jumpers
+* We began prototyping the AstroClicker software, located in the astro_clicker_demo.py file.
+* Our main design goal was to make the program user-friendly and intuitive, while avoiding an overwhelming or restrictive experience.
+* After multiple rounds of prototyping and refinement, we finalized the following code structure:
-On the back of the Qwiic Button, you'll find four solder jumpers labeled A0, A1, A2, and A3. By bridging these with solder, you change the I2C address. Only one button on the chain can use the default address (0x6F).
+1. Initialization and Data Structure
-**Address Table:**
+* Imports essential libraries for hardware interaction, timing, subprocess execution, and argument parsing.
+* The speak_text function handles text-to-speech using the external espeak program and logs all outputs to the console, regardless of the current OUTPUT_MODE (speaker or silent).
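+
+A simplified sketch of that helper (the full version lives in astro_clicker_demo.py in the linked repo; details here are approximate):
+
+```python
+import subprocess
+
+OUTPUT_MODE = "speaker"  # set to "silent" to skip audio output
+
+def speak_text(text):
+    """Always log to the console; only call espeak when audio output is enabled."""
+    print(f"[AstroClicker] {text}")
+    if OUTPUT_MODE == "speaker":
+        subprocess.run(["espeak", text], check=False)
+```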
-| A3 | A2 | A1 | A0 | Address (hex) |
-|----|----|----|----|---------------|
-| 0 | 0 | 0 | 0 | 0x6F |
-| 0 | 0 | 0 | 1 | 0x6E |
-| 0 | 0 | 1 | 0 | 0x6D |
-| 0 | 0 | 1 | 1 | 0x6C |
-| 0 | 1 | 0 | 0 | 0x6B |
-| 0 | 1 | 0 | 1 | 0x6A |
-| 0 | 1 | 1 | 0 | 0x69 |
-| 0 | 1 | 1 | 1 | 0x68 |
-| 1 | 0 | 0 | 0 | 0x67 |
-| ...| ...| ...| ... | ... |
+Celestial data is divided into three layers based on distance from Earth:
-For example, if you solder A0 closed (leave A1, A2, A3 open), the address becomes 0x6E.
+* Layer 0 (Closest): CONSTELLATION_DATA
+* Layer 1 (Intermediate): SOLAR_SYSTEM_DATA
+* Layer 2 (Farthest): DEEP_SKY_DATA
-**Soldering Tips:**
-- Use a small amount of solder to bridge the pads for the jumper you want to close.
-- Only one jumper needs to be closed for each address change (see table above).
-- Power cycle the button after changing the jumper.
+------------------------------------------------------------------------
-##### 2. Digitally: Using Software to Change Address
+2. The SkyNavigator State Machine
-You can also change the address in software (temporarily or permanently) using the example script `qwiic_button_ex6_changeI2CAddress.py` in the Lab 4 folder. This is useful if you want to reassign addresses without soldering.
+* The SkyNavigator class manages the user’s state, tracking:
+  * The layer_index (starting at 1, representing the Solar System).
+  * Which objects have been viewed, using a list called unseen_targets.
-Run the script and follow the prompts:
-```bash
-python qwiic_button_ex6_changeI2CAddress.py
-```
-Enter the new address (e.g., 5B for 0x5B) when prompted. Power cycle the button after changing the address.
+The _set_new_target() method:
+* Randomly selects an available object from the current layer.
+* Resets that layer's availability once all objects have been viewed.
+
+The move(direction) method updates the state based on joystick input:
+* 'up' / 'down': Adjusts layer_index to zoom in or out, switching between the three celestial layers.
+* 'left' / 'right': Keeps the user within the current layer and selects a new random target.
+* Built-in boundary checks prevent movement beyond Layer 0 or Layer 2.
+* After every successful movement, the new location or target is announced using the speak_text function.
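+
+To make the flow concrete, here is a stripped-down sketch of that state machine (illustrative only, with placeholder data; see the linked repo for the full implementation):
+
+```python
+import random
+
+# Placeholder data; the real lists hold many more objects, each with a name and a fact.
+CONSTELLATION_DATA = [{"name": "Orion", "type": "constellation"}]
+SOLAR_SYSTEM_DATA = [{"name": "Mars", "type": "planet"}, {"name": "Jupiter", "type": "planet"}]
+DEEP_SKY_DATA = [{"name": "Andromeda Galaxy", "type": "galaxy"}]
+LAYERS = [CONSTELLATION_DATA, SOLAR_SYSTEM_DATA, DEEP_SKY_DATA]
+
+class SkyNavigator:
+    def __init__(self):
+        self.layer_index = 1                      # start at Layer 1: the Solar System
+        self.unseen_targets = list(LAYERS[self.layer_index])
+        self.target = None
+        self._set_new_target()
+
+    def _set_new_target(self):
+        # Once every object in the layer has been viewed, make them all available again.
+        if not self.unseen_targets:
+            self.unseen_targets = list(LAYERS[self.layer_index])
+        self.target = random.choice(self.unseen_targets)
+        self.unseen_targets.remove(self.target)
+
+    def move(self, direction):
+        if direction in ("up", "down"):
+            step = 1 if direction == "up" else -1
+            new_index = self.layer_index + step
+            if not 0 <= new_index <= 2:           # boundary check: stay within the 3 layers
+                return "You can't zoom any further in that direction."
+            self.layer_index = new_index
+            self.unseen_targets = list(LAYERS[self.layer_index])
+        # 'left'/'right' (or a successful zoom) picks a new random target in the current layer.
+        self._set_new_target()
+        return f"Now looking at {self.target['name']}, a {self.target['type']}."
+```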
-**Note:** The software method is less foolproof and you need to make sure to keep track of which button has which address!
+------------------------------------------------------------------------
-##### Using Multiple Buttons in Code
+3. The Main Interaction Loop (runExample)
+* The runExample function initializes the joystick and the SkyNavigator.
+* A welcome message and the initial target's details are spoken aloud.
+* An infinite while loop continuously reads the joystick's horizontal (x_val), vertical (y_val), and button state. It uses a **debounce timer** (MOVE_DEBOUNCE_TIME) to prevent rapid, accidental inputs.
-After setting unique addresses, you can use multiple buttons in your script. See these example scripts in the Lab 4 folder:
+| Input Action | Resulting Action | Output/Narration |
+| :--- | :--- | :--- |
+| **Joystick Button Click (Release)** | Stays at current target. | Reads the **`name`** and **`fact`** of the current target, followed by a prompt for the next action. |
+| **Joystick Up** (Y-Value > 600) | Calls `navigator.move('up')` (Zoom Out/Farther). | Announces the zoom-out and the new target's name/type, or a boundary message. |
+| **Joystick Down** (Y-Value < 400) | Calls `navigator.move('down')` (Zoom In/Closer). | Announces the zoom-in and the new target's name/type, or a boundary message. |
+| **Joystick Left** (X-Value > 600) | Calls `navigator.move('left')` (Scan/New Target). | Announces a scan left and the new target's name/type. |
+| **Joystick Right** (X-Value < 400) | Calls `navigator.move('right')` (Scan/New Target). | Announces a scan right and the new target's name/type. |
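+
+A simplified version of that polling loop, reusing the speak_text and SkyNavigator sketches above and assuming the SparkFun qwiic_joystick driver (the button branch is omitted for brevity; the thresholds mirror the table):
+
+```python
+import time
+import qwiic_joystick
+
+MOVE_DEBOUNCE_TIME = 0.5  # seconds to wait between accepted joystick moves
+
+def run_example(navigator):
+    joystick = qwiic_joystick.QwiicJoystick()
+    if not joystick.connected:
+        raise RuntimeError("Joystick not detected on the Qwiic bus")
+    joystick.begin()
+
+    speak_text("Welcome to the AstroClicker.")
+    last_move = 0.0
+    while True:
+        x_val, y_val = joystick.horizontal, joystick.vertical
+        now = time.time()
+        if now - last_move >= MOVE_DEBOUNCE_TIME:
+            direction = None
+            if y_val > 600:
+                direction = "up"       # zoom out (farther)
+            elif y_val < 400:
+                direction = "down"     # zoom in (closer)
+            elif x_val > 600:
+                direction = "left"     # scan for a new target
+            elif x_val < 400:
+                direction = "right"
+            if direction:
+                speak_text(navigator.move(direction))
+                last_move = now
+        time.sleep(0.05)               # small poll interval to keep CPU usage low
+```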
-- **`qwiic_1_button.py`**: Basic example for reading a single Qwiic Button (default address 0x6F). Run with:
- ```bash
- python qwiic_1_button.py
- ```
+------------------------------------------------------------------------
-- **`qwiic_button_led_demo.py`**: Demonstrates using two Qwiic Buttons at different addresses (e.g., 0x6F and 0x6E) and controlling their LEDs. Button 1 toggles its own LED; Button 2 toggles both LEDs. Run with:
- ```bash
- python qwiic_button_led_demo.py
- ```
+4. Program Entry Point and Exit
+* The main() function uses the argparse module to allow the user to optionally specify the output mode (mode speaker or mode silent) when running the script.
+* The program can be cleanly exited by pressing **Ctrl+C**.
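+
+A minimal sketch of that entry point (the `--mode` flag name is our guess at the interface described above; it builds on the sketches in the earlier sections):
+
+```python
+import argparse
+
+def main():
+    parser = argparse.ArgumentParser(description="AstroClicker joystick sky-navigation demo")
+    parser.add_argument("--mode", choices=["speaker", "silent"], default="speaker",
+                        help="speak aloud through espeak, or log to the console only")
+    args = parser.parse_args()
+
+    global OUTPUT_MODE
+    OUTPUT_MODE = args.mode
+    try:
+        run_example(SkyNavigator())
+    except KeyboardInterrupt:
+        print("\nExiting AstroClicker. Clear skies!")
+
+if __name__ == "__main__":
+    main()
+```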
-Here is a minimal code example for two buttons:
-```python
-import qwiic_button
+
-# Default button (0x6F)
-button1 = qwiic_button.QwiicButton()
-# Button with A0 soldered (0x6E)
-button2 = qwiic_button.QwiicButton(0x6E)
+Hardware:
-button1.begin()
-button2.begin()
+We began developing the hardware prototype, focusing on the following main considerations:
+* The device should be handheld, with the joystick positioned in an ergonomic spot for comfortable use.
+* The speaker should face the user to ensure clear audio output and prevent muffled sound.
+* The Raspberry Pi should have proper ventilation to avoid overheating, along with space for a battery to power the system.
-while True:
- if button1.is_button_pressed():
- print("Button 1 pressed!")
- if button2.is_button_pressed():
- print("Button 2 pressed!")
-```
+Most of these considerations were carried over from the cardboard prototype, though we made several design adjustments to improve ergonomics and overall usability.
-For more details, see the [Qwiic Button Hookup Guide](https://learn.sparkfun.com/tutorials/qwiic-button-hookup-guide/all#i2c-address).
+Below are images of the updated hardware prototype.
----
+
+
+
-### PCF8574 GPIO Expander: Add More Pins Over I²C
-Sometimes your Pi’s header GPIO pins are already full (e.g., with a display or HAT). That’s where an I²C GPIO expander comes in handy.
-
-We use the Adafruit PCF8574 I²C GPIO Expander, which gives you 8 extra digital pins over I²C. It’s a great way to prototype with LEDs, buttons, or other components on the breadboard without worrying about pin conflicts—similar to how Arduino users often expand their pinouts when prototyping physical interactions.
-
-**Why is this useful?**
-- You only need two wires (I²C: SDA + SCL) to unlock 8 extra GPIOs.
-- It integrates smoothly with CircuitPython and Blinka.
-- It allows a clean prototyping workflow when the Pi’s 40-pin header is already occupied by displays, HATs, or sensors.
-- Makes breadboard setups feel more like an Arduino-style prototyping environment where it’s easy to wire up interaction elements.
-
-**Demo Script:** `Lab 4/gpio_expander.py`
-
-
-
-
-
-We connected 8 LEDs (through 220 Ω resistors) to the expander and ran a little light show. The script cycles through three patterns:
-- Chase (one LED at a time, left to right)
-- Knight Rider (back-and-forth sweep)
-- Disco (random blink chaos)
-
-Every few runs, the script swaps to the next pattern automatically:
-```bash
-python gpio_expander.py
-```
-
-This is a playful way to visualize how the expander works, but the same technique applies if you wanted to prototype buttons, switches, or other interaction elements. It’s a lightweight, flexible addition to your prototyping toolkit.
-
----
-
-### Servo Control with SparkFun Servo pHAT
-For this lab, you will use the **SparkFun Servo pHAT** to control a micro servo (such as the Miuzei MS18 or similar 9g servo). The Servo pHAT stacks directly on top of the Adafruit Mini PiTFT (135×240) display without pin conflicts:
-- The Mini PiTFT uses SPI (GPIO22, 23, 24, 25) for display and buttons ([SPI pinout](https://pinout.xyz/pinout/spi)).
-- The Servo pHAT uses I²C (GPIO2 & 3) for the PCA9685 servo driver ([I2C pinout](https://pinout.xyz/pinout/i2c)).
-- Since SPI and I²C are separate buses, you can use both boards together.
-**⚡ Power:**
-- Plug a USB-C cable into the Servo pHAT to provide enough current for the servos. The Pi itself should still be powered by its own USB-C supply. Do NOT power servos from the Pi’s 5V rail.
-
-
-
-
-
-**Basic Python Example:**
-We provide a simple example script: `Lab 4/pi_servo_hat_test.py` (requires the `pi_servo_hat` Python package).
-Run the example:
-```
-python pi_servo_hat_test.py
-```
-For more details and advanced usage, see the [official SparkFun Servo pHAT documentation](https://learn.sparkfun.com/tutorials/pi-servo-phat-v2-hookup-guide/all#resources-and-going-further).
-A servo motor is a rotary actuator that allows for precise control of angular position. The position is set by the width of an electrical pulse (PWM). You can read [this Adafruit guide](https://learn.adafruit.com/adafruit-arduino-lesson-14-servo-motors/servo-motors) to learn more about how servos work.
+### Part F
----
+Here is our final video with a walkthrough of the AstroClicker prototype in both software and hardware:
+https://youtu.be/bRyzQZdn2rA
-### Part F
+### AI Contributions
-### Record
+Throughout this lab, we got help from Gemini with:
-Document all the prototypes and iterations you have designed and worked on! Again, deliverables for this lab are writings, sketches, photos, and videos that show what your prototype:
-* "Looks like": shows how the device should look, feel, sit, weigh, etc.
-* "Works like": shows what the device can do
-* "Acts like": shows how a person would interact with the device
+* Generating "final" images throughout the lab. We would often sketch a rough idea on paper, and then use Gemini to refine it into a presentable image.
+* Developing and documenting the code for the AstroClicker prototype.
+Everything else (ideating, eliciting feedback, designing, and building the prototypes) we did ourselves.
diff --git a/Lab 5/README.md b/Lab 5/README.md
index 73770087a4..eaeb221043 100644
--- a/Lab 5/README.md
+++ b/Lab 5/README.md
@@ -1,182 +1,37 @@
# Observant Systems
-**NAMES OF COLLABORATORS HERE**
-
-For lab this week, we focus on creating interactive systems that can detect and respond to events or stimuli in the environment of the Pi, like the Boat Detector we mentioned in lecture.
-Your **observant device** could, for example, count items, find objects, recognize an event or continuously monitor a room.
-
-This lab will help you think through the design of observant systems, particularly corner cases that the algorithms need to be aware of.
-
-## Prep
-
-1. Install VNC on your laptop if you have not yet done so. This lab will actually require you to run script on your Pi through VNC so that you can see the video stream. Please refer to the [prep for Lab 2](https://github.com/FAR-Lab/Interactive-Lab-Hub/blob/-/Lab%202/prep.md#using-vnc-to-see-your-pi-desktop).
-2. Install the dependencies as described in the [prep document](prep.md).
-3. Read about [OpenCV](https://opencv.org/about/),[Pytorch](https://pytorch.org/), [MediaPipe](https://mediapipe.dev/), and [TeachableMachines](https://teachablemachine.withgoogle.com/).
-4. Read Belloti, et al.'s [Making Sense of Sensing Systems: Five Questions for Designers and Researchers](https://www.cc.gatech.edu/~keith/pubs/chi2002-sensing.pdf).
-
-### For the lab, you will need:
-1. Pull the new Github Repo
-1. Raspberry Pi
-1. Webcam
-
-### Deliverables for this lab are:
-1. Show pictures, videos of the "sense-making" algorithms you tried.
-1. Show a video of how you embed one of these algorithms into your observant system.
-1. Test, characterize your interactive device. Show faults in the detection and how the system handled it.
-
-## Overview
-Building upon the paper-airplane metaphor (we're understanding the material of machine learning for design), here are the four sections of the lab activity:
-
-A) [Play](#part-a)
-
-B) [Fold](#part-b)
-
-C) [Flight test](#part-c)
-
-D) [Reflect](#part-d)
-
----
-
-### Part A
-### Play with different sense-making algorithms.
-
-#### Pytorch for object recognition
-
-For this first demo, you will be using PyTorch and running a MobileNet v2 classification model in real time (30 fps+) on the CPU. We will be following steps adapted from [this tutorial](https://pytorch.org/tutorials/intermediate/realtime_rpi.html).
-
-
-
-
-To get started, install dependencies into a virtual environment for this exercise as described in [prep.md](prep.md).
-
-Make sure your webcam is connected.
-
-You can check the installation by running:
-
-```
-python -c "import torch; print(torch.__version__)"
-```
-
-If everything is ok, you should be able to start doing object recognition. For this default example, we use [MobileNet_v2](https://arxiv.org/abs/1801.04381). This model is able to perform object recognition for 1000 object classes (check [classes.json](classes.json) to see which ones.
-
-Start detection by running
-
-```
-python infer.py
-```
-
-The first 2 inferences will be slower. Now, you can try placing several objects in front of the camera.
-
-Read the `infer.py` script and become familiar with the code. You can change the video resolution and frames per second (FPS). You may also use the weights of the larger pre-trained mobilenet_v3_large model, as described [here](https://pytorch.org/tutorials/intermediate/realtime_rpi.html#model-choices).
-
-#### More classes
-
-[PyTorch supports transfer learning](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html), so you can fine‑tune and transfer learn models to recognize your own objects. It requires extra steps, so we won't cover it here.
-
-For more details on transfer learning and deployment to embedded devices, see Deep Learning on Embedded Systems: A Hands‑On Approach Using Jetson Nano and Raspberry Pi (Tariq M. Arif). [Chapter 10](https://onlinelibrary.wiley.com/doi/10.1002/9781394269297.ch10) covers transfer learning for object detection on desktop, and [Chapter 15](https://onlinelibrary.wiley.com/doi/10.1002/9781394269297.ch15) describes moving models to the Pi using ONNX.
-
-### Machine Vision With Other Tools
-The following sections describe tools ([MediaPipe](#mediapipe) and [Teachable Machines](#teachable-machines)).
-
-#### MediaPipe
-
-A established open source and efficient method of extracting information from video streams comes out of Google's [MediaPipe](https://mediapipe.dev/), which offers state of the art face, face mesh, hand pose, and body pose detection.
-
-
-
-To get started, install dependencies into a virtual environment for this exercise as described in [prep.md](prep.md):
-
-Each of the installs will take a while, please be patient. After successfully installing mediapipe, connect your webcam to your Pi and use **VNC to access to your Pi**, open the terminal, and go to Lab 5 folder and run the hand pose detection script we provide:
-(***it will not work if you use ssh from your laptop***)
-
-
-```
-(venv-ml) pi@ixe00:~ $ cd Interactive-Lab-Hub/Lab\ 5
-(venv-ml) pi@ixe00:~ Interactive-Lab-Hub/Lab 5 $ python hand_pose.py
-```
-
-Try the two main features of this script: 1) pinching for percentage control, and 2) "[Quiet Coyote](https://www.youtube.com/watch?v=qsKlNVpY7zg)" for instant percentage setting. Notice how this example uses hardcoded positions and relates those positions with a desired set of events, in `hand_pose.py`.
-
-Consider how you might use this position based approach to create an interaction, and write how you might use it on either face, hand or body pose tracking.
-
-(You might also consider how this notion of percentage control with hand tracking might be used in some of the physical UI you may have experimented with in the last lab, for instance in controlling a servo or rotary encoder.)
-
-
-
-#### Moondream Vision-Language Model
-
-[Moondream](https://www.ollama.com/library/moondream) is a lightweight vision-language model that can understand and answer questions about images. Unlike the classification models above, Moondream can describe images in natural language and answer specific questions about what it sees.
-
-To use Moondream, first make sure Ollama is running and pull the model:
-```bash
-ollama pull moondream
-```
-
-Then run the simple demo script:
-```bash
-python moondream_simple.py
-```
-
-This will capture an image from your webcam and let you ask questions about it in natural language. Note that vision-language models are slower than classification models (responses may take up to minutes on a Raspberry Pi). There are newer models like [LFM2-VL](https://huggingface.co/LiquidAI/LFM2-VL-450M-GGUF), but many are very recent and not yet optimized for embedded devices.
-
-**Design consideration**: Think about how slower response times change your interaction design. What kinds of observant systems benefit from thoughtful, delayed responses rather than real-time classification? Consider systems that monitor over longer time periods or provide periodic summaries rather than instant feedback.
-
-#### Teachable Machines
-Google's [TeachableMachines](https://teachablemachine.withgoogle.com/train) is very useful for prototyping with the capabilities of machine learning. We are using [a python package](https://github.com/MeqdadDev/teachable-machine-lite) with tensorflow lite to simplify the deployment process.
-
-
-
-To get started, install dependencies into a virtual environment for this exercise as described in [prep.md](prep.md):
-
-After installation, connect your webcam to your Pi and use **VNC to access to your Pi**, open the terminal, and go to Lab 5 folder and run the example script:
-(***it will not work if you use ssh from your laptop***)
-
-
-```
-(venv-tml) pi@ixe00:~ Interactive-Lab-Hub/Lab 5 $ python tml_example.py
-```
-
-
-Next train your own model. Visit [TeachableMachines](https://teachablemachine.withgoogle.com/train), select Image Project and Standard model. The raspberry pi 4 is capable to run not just the low resource models. Second, use the webcam on your computer to train a model. *Note: It might be advisable to use the pi webcam in a similar setting you want to deploy it to improve performance.* For each class try to have over 150 samples, and consider adding a background or default class where you have nothing in view so the model is trained to know that this is the background. Then create classes based on what you want the model to classify. Lastly, preview and iterate. Finally export your model as a 'Tensorflow lite' model. You will find an '.tflite' file and a 'labels.txt' file. Upload these to your pi (through one of the many ways such as [scp](https://www.raspberrypi.com/documentation/computers/remote-access.html#using-secure-copy), sftp, [vnc](https://help.realvnc.com/hc/en-us/articles/360002249917-VNC-Connect-and-Raspberry-Pi#transferring-files-to-and-from-your-raspberry-pi-0-6), or a connected visual studio code remote explorer).
-
-
-
-Include screenshots of your use of Teachable Machines, and write how you might use this to create your own classifier. Include what different affordances this method brings, compared to the OpenCV or MediaPipe options.
-
-#### (Optional) Legacy audio and computer vision observation approaches
-In an earlier version of this class students experimented with observing through audio cues. Find the material here:
-[Audio_optional/audio.md](Audio_optional/audio.md).
-Teachable machines provides an audio classifier too. If you want to use audio classification this is our suggested method.
-
-In an earlier version of this class students experimented with foundational computer vision techniques such as face and flow detection. Techniques like these can be sufficient, more performant, and allow non discrete classification. Find the material here:
-[CV_optional/cv.md](CV_optional/cv.md).
+Refer to Nikhil Gangaram's Lab Hub for the code.
+Collaborators: Sachin Jojode, Nikhil Gangaram, and Arya Prasad.
### Part B
### Construct a simple interaction.
-* Pick one of the models you have tried, and experiment with prototyping an interaction.
-* This can be as simple as the boat detector shown in lecture.
-* Try out different interaction outputs and inputs.
+Our goal is to create a system that helps non-signers learn ASL through a customized, interactive experience. The ultimate vision is to accurately recognize the user’s hand gestures and, with the help of MoonDream, provide tailored feedback or responses. As an initial step, we built a basic gesture-recognition model using Teachable Machine, but its results were inconsistent and varied significantly. Below is one of the better outcomes we achieved with Teachable Machine:
+
+
+We decided to shift our approach and use MoonDream, which aligned more closely with what we wanted to achieve. Our current plan for a basic interaction looks like this:
+
+1) The TTS model asks the user to perform a specific sign.
+2) The user signs it, and the gesture is sent to MoonDream.
+3) MoonDream identifies the gesture and provides feedback.
+4) The TTS model then reads this feedback aloud to the user.
+5) This process can continue in a loop, allowing for repeated and interactive practice (see the sketch below).
+Our prototype:
-**\*\*\*Describe and detail the interaction, as well as your experimentation here.\*\*\***
+
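+A minimal sketch of this loop, assuming the `ollama` and `pyttsx3` Python packages, a locally pulled `moondream` model, and a connected webcam; `speak()` and `capture_frame()` are illustrative helpers, and the actual implementation lives in `moondream_sign.py`:
+
+```python
+import cv2
+import ollama
+import pyttsx3
+
+SIGNS = ["hello", "thank you", "yes"]   # hypothetical practice set
+
+engine = pyttsx3.init()
+
+def speak(text):
+    """Read prompts or feedback aloud via TTS."""
+    engine.say(text)
+    engine.runAndWait()
+
+def capture_frame(cam_index=0):
+    """Grab one webcam frame and return it as JPEG bytes."""
+    cap = cv2.VideoCapture(cam_index)
+    ok, frame = cap.read()
+    cap.release()
+    if not ok:
+        raise RuntimeError("Could not read from webcam")
+    ok, jpg = cv2.imencode(".jpg", frame)
+    return jpg.tobytes()
+
+for sign in SIGNS:
+    speak(f"Please sign: {sign}")            # 1) prompt the learner
+    image_bytes = capture_frame()            # 2) capture their gesture
+    reply = ollama.chat(                     # 3) ask MoonDream about the frame
+        model="moondream",
+        messages=[{
+            "role": "user",
+            "content": f"The user is trying to sign '{sign}' in ASL. "
+                       "Describe the hand shape and say whether it looks correct.",
+            "images": [image_bytes],
+        }],
+    )
+    speak(reply["message"]["content"])       # 4) read the feedback aloud
+```
+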
### Part C
### Test the interaction prototype
-Now flight test your interactive prototype and **note down your observations**:
-For example:
-1. When does it what it is supposed to do?
-1. When does it fail?
-1. When it fails, why does it fail?
-1. Based on the behavior you have seen, what other scenarios could cause problems?
+During our prototype testing, we discovered that lighting conditions and the timing of when the image is captured greatly affected the system’s accuracy. If the photo was taken too early or the lighting was not ideal, MoonDream often had trouble recognizing the gesture. We also noticed that the interaction did not reflect natural human communication. In a real learning environment with a teacher, there would be more subtle variation and flow, while our current setup feels rigid and repetitive. The prototype code can be found in moondream_sign.py.
**\*\*\*Think about someone using the system. Describe how you think this will work.\*\*\***
-1. Are they aware of the uncertainties in the system?
-1. How bad would they be impacted by a miss classification?
-1. How could change your interactive system to address this?
-1. Are there optimizations you can try to do on your sense-making algorithm.
+
+We noticed that learning a new language can already be frustrating, so if the system gives wrong feedback even once, users quickly lose trust in it. When we tried other tools, Google AI Live worked much better than our first version. This seems to be because it looks at a full video of the user instead of just one picture, which makes the interaction feel more natural and realistic. We plan to explore this idea more in the next part of the lab. Still, we found that Google’s model sometimes had trouble with longer clips or full conversations, often assuming everything the user signed was correct even when it wasn’t.
+
+Video: https://youtu.be/puvYo5_OJCY
### Part D
### Characterize your own Observant system
@@ -184,17 +39,32 @@ For example:
Now that you have experimented with one or more of these sense-making systems **characterize their behavior**.
During the lecture, we mentioned questions to help characterize a material:
* What can you use X for?
+Our device can be used to recognize and interpret ASL hand gestures. It helps non-signers learn basic signs by giving real-time feedback and using text to speech to make the interaction more accessible and engaging.
+
* What is a good environment for X?
+A good environment is one with bright, even lighting, a plain background, and a stable camera. The user should be in front of the camera with their hands fully visible and minimal movement in the background.
+
* What is a bad environment for X?
+A bad environment includes dim or uneven lighting, cluttered backgrounds, or multiple people in view. It also struggles with poor camera quality, motion blur, or when the internet connection is unstable.
+
* When will X break?
+It will break when gestures are done too quickly, when part of the hand is out of frame, or when lighting suddenly changes. Timing issues, like capturing a frame too early or too late, also cause errors.
+
* When it breaks how will X break?
+When it breaks, our device may misclassify the gesture, fail to respond, or repeat incorrect feedback. Sometimes it freezes or outputs random labels, confusing the user.
+
* What are other properties/behaviors of X?
+It is sensitive to lighting and angles, consistent in stable conditions, and quick to respond when inputs are clear. However, it doesn’t adapt to individual users yet and can’t recognize complex or blended gestures.
+
* How does X feel?
+It feels experimental: encouraging when it works, but frustrating when it fails. The interaction feels mechanical but shows potential to become a supportive learning tool for sign language practice.
-**\*\*\*Include a short video demonstrating the answers to these questions.\*\*\***
+Video Link: https://youtu.be/ZU5NM-oH540
### Part 2.
Following exploration and reflection from Part 1, finish building your interactive system, and demonstrate it in use with a video.
**\*\*\*Include a short video demonstrating the finished result.\*\*\***
+https://youtu.be/bRyzQZdn2rA
diff --git a/Lab 6/HandTrackingModule.py b/Lab 6/HandTrackingModule.py
new file mode 100644
index 0000000000..902c757fd4
--- /dev/null
+++ b/Lab 6/HandTrackingModule.py
@@ -0,0 +1,71 @@
+import cv2
+import mediapipe as mp
+import time
+
+
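+# Thin wrapper around MediaPipe Hands used by the Lab 6 gesture scripts:
+# findHands() runs detection (optionally drawing the landmarks on the frame) and
+# findPosition() returns [id, x_px, y_px] for each landmark of one detected hand.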
+class handDetector():
+ def __init__(self, mode=False, maxHands=2, detectionCon=0.5, trackCon=0.5):
+ self.mode = mode
+ self.maxHands = maxHands
+ self.detectionCon = detectionCon
+ self.trackCon = trackCon
+
+ self.mpHands = mp.solutions.hands
+ self.hands = self.mpHands.Hands(self.mode, self.maxHands,
+ self.detectionCon, self.trackCon)
+ self.mpDraw = mp.solutions.drawing_utils
+
+ def findHands(self, img, draw=True):
+ imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+ self.results = self.hands.process(imgRGB)
+ # print(results.multi_hand_landmarks)
+
+ if self.results.multi_hand_landmarks:
+ for handLms in self.results.multi_hand_landmarks:
+ if draw:
+ self.mpDraw.draw_landmarks(img, handLms,
+ self.mpHands.HAND_CONNECTIONS)
+ return img
+
+ def findPosition(self, img, handNo=0, draw=True):
+
+ lmList = []
+ if self.results.multi_hand_landmarks:
+ myHand = self.results.multi_hand_landmarks[handNo]
+ for id, lm in enumerate(myHand.landmark):
+ # print(id, lm)
+ h, w, c = img.shape
+ cx, cy = int(lm.x * w), int(lm.y * h)
+ # print(id, cx, cy)
+ lmList.append([id, cx, cy])
+ if draw:
+ cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED)
+
+ return lmList
+
+
+def main():
+ pTime = 0
+ cTime = 0
+ cap = cv2.VideoCapture(1)
+ detector = handDetector()
+ while True:
+ success, img = cap.read()
+ img = detector.findHands(img)
+ lmList = detector.findPosition(img)
+ if len(lmList) != 0:
+ print(lmList[4])
+
+ cTime = time.time()
+ fps = 1 / (cTime - pTime)
+ pTime = cTime
+
+ cv2.putText(img, str(int(fps)), (10, 70), cv2.FONT_HERSHEY_PLAIN, 3,
+ (255, 0, 255), 3)
+
+ cv2.imshow("Image", img)
+ cv2.waitKey(1)
+
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/Lab 6/README.md b/Lab 6/README.md
index c23ff6153b..398bc5956b 100644
--- a/Lab 6/README.md
+++ b/Lab 6/README.md
@@ -1,6 +1,6 @@
# Distributed Interaction
-**NAMES OF COLLABORATORS HERE**
+Viha Srinivas (me), Nikhil Gangaram, Sachin Jojode, Arya Prasad
For submission, replace this section with your documentation!
@@ -23,221 +23,49 @@ Build interactive systems where **multiple devices communicate over a network**
---
## Part A: MQTT Messaging
-
-MQTT = lightweight messaging for IoT. Publish/subscribe model with central broker.
-
-**Concepts:**
-- **Broker**: `farlab.infosci.cornell.edu:1883`
-- **Topic**: Like `IDD/bedroom/temperature` (use `#` wildcard)
-- **Publish/Subscribe**: Send and receive messages
-
-**Install MQTT tools on your Pi:**
-```bash
-sudo apt-get update
-sudo apt-get install -y mosquitto-clients
-```
-
-**Test it:**
-
-**Subscribe to messages (listener):**
-```bash
-mosquitto_sub -h farlab.infosci.cornell.edu -p 1883 -t 'IDD/#' -u idd -P 'device@theFarm'
-```
-
-**Publish a message (sender):**
-```bash
-mosquitto_pub -h farlab.infosci.cornell.edu -p 1883 -t 'IDD/test/yourname' -m 'Hello!' -u idd -P 'device@theFarm'
-```
-
-> **💡 Tips:**
-> - Replace `yourname` with your actual name in the topic
-> - Use single quotes around the password: `'device@theFarm'`
-
-**🔧 Debug Tool:** View all MQTT messages in real-time at `http://farlab.infosci.cornell.edu:5001`
-
-
-
-**💡 Brainstorm 5 ideas for messaging between devices**
-
----
+Brainstormed Ideas (see the messaging sketch after the list):
+1. Party Lights: Each Pi senses sound or light and sends colorful flashes to a shared web grid that reacts like a disco.
+2. Mood Wall: Each Pi sends a color based on room lighting or emotion, forming a shared “mood board.”
+3. Distributed Band: Each Pi plays a different sound when triggered; together they form live music.
+4. Presence Mirror: Each Pi lights up when someone is nearby, showing who’s “present” across locations.
+5. Fortune Machine: Each Pi sends a random value that combines into one group-generated “fortune” message.
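+
+As a rough sketch of the messaging these ideas rely on, a Pi could publish a sensor reading to the class broker like this (assuming the `paho-mqtt` package and mirroring the MQTT setup in `sync_display.py`; the topic and values are placeholders):
+
+```python
+import json
+import time
+import uuid
+
+import paho.mqtt.client as mqtt
+
+# Same broker and credentials used elsewhere in this repo (see sync_display.py)
+client = mqtt.Client(str(uuid.uuid1()))
+client.username_pw_set('idd', 'device@theFarm')
+client.connect('farlab.infosci.cornell.edu', port=1883, keepalive=60)
+client.loop_start()
+
+# e.g. idea 2 (Mood Wall): publish a color reading every few seconds
+while True:
+    reading = {'r': 120, 'g': 200, 'b': 90, 'timestamp': time.time()}  # placeholder values
+    client.publish('IDD/moodwall/viha', json.dumps(reading))           # hypothetical topic
+    time.sleep(5)
+```
+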
## Part B: Collaborative Pixel Grid
-
-Each Pi = one pixel, controlled by RGB sensor, displayed in real-time grid.
-
-**Architecture:** `Pi (sensor) → MQTT → Server → Web Browser`
-
-**Setup:**
-
-1. **Sensor**
-
-#### Light/Proximity/Gesture sensor (APDS-9960)
-We use this sensor [Adafruit APDS-9960](https://www.adafruit.com/product/3595) for this exmaple to detect light (also RGB)
-
-
-
-Connect it to your pi with Qwiic connector
-
-
-
-We need to use the screen to display the color detection, so we need to stop the running piscreen.service to make your screen available again
-
-```bash
-# stop the screen service
-sudo systemctl stop piscreen.service
-```
-
-if you want to restart the screen service
-```bash
-# start the screen service
-sudo systemctl start piscreen.service
-```
-
-2. **Server** (one person on laptop):
-```bash
-cd "Lab 6"
-source .venv/bin/activate
-pip install -r requirements-server.txt
-python app.py
-```
-
-2. **View in browser:**
- - Grid: `http://farlab.infosci.cornell.edu:5000`
- - Controller: `http://farlab.infosci.cornell.edu:5000/controller`
-
-3. **Pi publisher** (everyone on their Pi):
-```bash
-# First time setup - create virtual environment
-cd "Lab 6"
-python -m venv .venv
-source .venv/bin/activate
-pip install -r requirements-pi.txt
-
-# Run the publisher
-python pixel_grid_publisher.py
-```
-
-Hold colored objects near sensor to change your pixel!
-
-
-
-**📸 Include: Screenshot of grid + photo of your Pi setup**
-
----
+
+
## Part C: Make Your Own
-**Requirements:**
-- 3+ people, 3+ Pis
-- Each Pi contributes sensor input via MQTT
-- Meaningful or fun interaction
+1. Project Description
-**Ideas:**
+For our final project, we’re developing gesture-controlled modules that serve as the foundation of our system. The concept involves using low-cost computers, like Raspberry Pis, to communicate with a central computer (our laptops) and collectively maintain an accessible global state. We designed two gestures inspired by American Sign Language (ASL) that allow users to modify this shared state across all connected devices. In our prototype, these gestures let users cycle through the colors of the rainbow in opposite directions.
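+
+A minimal sketch of how that shared state is driven, based on the `SyncDisplay` helper in `Lab 6/sync_display.py` (gesture detection omitted here; the full pipeline is in `rainbow_pub.py`, and the two directions below are illustrative):
+
+```python
+from sync_display import SyncDisplay
+
+ROYGBIV = [(255, 0, 0), (255, 165, 0), (255, 255, 0), (0, 128, 0),
+           (0, 0, 255), (75, 0, 130), (238, 130, 238)]
+
+# Broadcaster Pi: publishes each new color over MQTT and renders it locally.
+sync = SyncDisplay(mode='both')
+index = 0
+
+def cycle(step):
+    """Advance the shared color by +1 or -1 (the two ASL-inspired gestures)."""
+    global index
+    index = (index + step) % len(ROYGBIV)
+    sync.display_color(*ROYGBIV[index])   # every subscribed Pi updates its PiTFT
+
+cycle(+1)   # e.g. triggered by the "next color" gesture
+cycle(-1)   # e.g. triggered by the "previous color" gesture
+
+# Each client Pi simply runs: SyncDisplay(mode='client').start()
+```
+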
-**Sensor Fortune Teller**
-- Each Pi sends 0-255 from different sensor
-- Server generates fortunes from combined values
+2. Architecture Diagram
+
+
-**Frankenstories**
-- Sensor events → story elements (not text!)
-- Red = danger, gesture up = climbed, distance <10cm = suddenly
+3. Build Documentation
+We broke our process down into 3 steps:
-**Distributed Instrument**
-- Each Pi = one musical parameter
-- Only works together
+pi-pi communication: https://www.youtube.com/watch?v=l3sK-Un6r_g
-**Others:** Games, presence display, mood ring
+gesture control: https://www.youtube.com/shorts/ilUMCtHcV4I?feature=share
-### Deliverables
+integration: https://www.youtube.com/shorts/WWuHhcyBsaM?feature=share
-Replace this README with your documentation:
+4. User Testing
+- Sachin’s girlfriend, Thirandi, tested the system while visiting.
+- She preferred not to be on camera but thought the concept was fun and creative.
+- She noted that the latency made the system feel unfinished, since the response wasn’t instantaneous.
+- She also mentioned that having only a few gestures made the interaction feel less intuitive.
-**1. Project Description**
-- What does it do? Why interesting? User experience?
-
-**2. Architecture Diagram**
-- Hardware, connections, data flow
-- Label input/computation/output
-
-**3. Build Documentation**
-- Photos of each Pi + sensors
-- MQTT topics used
-- Code snippets with explanations
-
-**4. User Testing**
-- **Test with 2+ people NOT on your team**
-- Photos/video of use
-- What did they think before trying?
-- What surprised them?
-- What would they change?
+Stephanie:
**5. Reflection**
-- What worked well?
-- Challenges with distributed interaction?
-- How did sensor events work?
-- What would you improve?
-
----
-
-## Code Files
-
-**Server files:**
-- `app.py` - Pixel grid server (Flask + WebSocket + MQTT)
-- `mqtt_viewer.py` - MQTT message viewer for debugging
-- `mqtt_bridge.py` - MQTT → WebSocket bridge
-- `requirements-server.txt` - Server dependencies
-
-**Pi files:**
-- `pixel_grid_publisher.py` - Example (RGB sensor → MQTT)
-- `requirements-pi.txt` - Pi dependencies
-
-**Web interface:**
-- `templates/grid.html` - Pixel grid display
-- `templates/controller.html` - Color picker
-- `templates/mqtt_viewer.html` - Message viewer
-
----
-
-## Debugging Tools
-
-**MQTT Message Viewer:** `http://farlab.infosci.cornell.edu:5001`
-- See all MQTT messages in real-time
-- View topics and payloads
-- Helpful for debugging your own projects
-
-**Command line:**
-```bash
-# See all IDD messages
-mosquitto_sub -h farlab.infosci.cornell.edu -p 1883 -t "IDD/#" -u idd -P "device@theFarm"
-```
-
----
-
-## Troubleshooting
-
-**MQTT:** Broker `farlab.infosci.cornell.edu:1883`, user `idd`, pass `device@theFarm`
-
-**Sensor:** Check `i2cdetect -y 1`, APDS-9960 at `0x39`
-
-**Grid:** Verify server running, check MQTT in console, test with web controller
-
-**Pi venv:** Make sure to activate: `source .venv/bin/activate`
-
-
----
-
-## Submission Checklist
-
-Before submitting:
-- [ ] Delete prep/instructions above
-- [ ] Add YOUR project documentation
-- [ ] Include photos/videos/diagrams
-- [ ] Document user testing with non-team members
-- [ ] Add reflection on learnings
-- [ ] List team names at top
-
-**Your README = story of what YOU built!**
-
----
+- The software modules were stable and reliable, thanks to using well-tested and proven technologies.
+- The computer vision pipeline in its early stages was jumpy and occasionally misread user gestures.
+- Arya refined the vision pipeline, greatly improving accuracy and responsiveness.
+- Sensor events from the camera act as triggers, which are sent through the MQTT network to update other Raspberry Pis.
+- Gemini assisted during the ideation phase, helping refine the written content, generate visuals for the sketch and control flow diagram, and support parts of the code development.
+- All team members contributed to both idea generation and software development throughout the project.
-Resources: [MQTT Guide](https://www.hivemq.com/mqtt-essentials/) | [Paho Python](https://www.eclipse.org/paho/index.php?page=clients/python/docs/index.php) | [Flask-SocketIO](https://flask-socketio.readthedocs.io/)
diff --git a/Lab 6/gesture_pub.py b/Lab 6/gesture_pub.py
new file mode 100644
index 0000000000..5612779c32
--- /dev/null
+++ b/Lab 6/gesture_pub.py
@@ -0,0 +1,138 @@
+# --- GESTURE PUBLISHER SCRIPT (Multi-Device Sync) ---
+# Detects thumb direction (left or right) and publishes the corresponding
+# color command via MQTT using the SyncDisplay class.
+
+import cv2
+import mediapipe as mp
+import time
+import numpy as np
+import sys
+# Import the Synchronization class
+from sync_display import SyncDisplay
+
+# --- Configuration ---
+FRAME_WIDTH = 640
+FRAME_HEIGHT = 480
+
+# Define the colors in RGB format (SyncDisplay uses R, G, B order 0-255)
+# Note: The original color values were BGR, they have been converted to RGB here.
+# (255, 0, 0) BGR -> (0, 0, 255) RGB (Blue)
+# (0, 255, 0) BGR -> (0, 255, 0) RGB (Green)
+
+COLOR_RIGHT_THUMB_RGB = (0, 255, 0) # Green (Thumb points Right)
+COLOR_LEFT_THUMB_RGB = (0, 0, 255) # Blue (Thumb points Left)
+COLOR_DEFAULT_RGB = (50, 50, 50) # Dark Gray (rest state)
+
+# --- Initialize MediaPipe Hand Detector ---
+mp_hands = mp.solutions.hands
+hands = mp_hands.Hands(
+ model_complexity=0,
+ min_detection_confidence=0.7,
+ min_tracking_confidence=0.5)
+
+# --- Initialize Webcam with Camera Index Fallback ---
+camera_index = 0
+cap = cv2.VideoCapture(camera_index)
+if not cap.isOpened():
+ camera_index = 1
+ print("Warning: Index 0 failed. Trying index 1...")
+ cap = cv2.VideoCapture(camera_index)
+
+if not cap.isOpened():
+ print("FATAL ERROR: Cannot open webcam at index 0 or 1. Check connection and permissions.")
+ sys.exit(1) # Use sys.exit(1) for cleaner shutdown
+
+cap.set(cv2.CAP_PROP_FRAME_WIDTH, FRAME_WIDTH)
+cap.set(cv2.CAP_PROP_FRAME_HEIGHT, FRAME_HEIGHT)
+print(f"Webcam initialized successfully at index {camera_index}.")
+
+# --- Initialize SyncDisplay in 'both' mode ---
+# 'both' means it broadcasts the command AND renders it locally on its own PiTFT
+try:
+ sync = SyncDisplay(mode='both')
+except Exception as e:
+ print(f"FATAL ERROR: Could not initialize SyncDisplay. Check MQTT config/internet. Error: {e}")
+ cap.release()
+ sys.exit(1)
+
+
+def draw_debug_info(frame, color_name, active_color_rgb):
+ """Draws the instructional text and a colored overlay for the VNC desktop."""
+ # Convert RGB color back to BGR for OpenCV display
+ active_color_bgr = (active_color_rgb[2], active_color_rgb[1], active_color_rgb[0])
+
+ overlay = np.full((FRAME_HEIGHT, FRAME_WIDTH, 3), active_color_bgr, dtype=np.uint8)
+ frame = cv2.addWeighted(frame, 0.5, overlay, 0.5, 0)
+
+ # Add text on top
+ text = f"THUMB: {color_name}"
+ cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
+ cv2.putText(frame, "PiTFT/Sync: Showing this color. Press 'q' to quit.", (10, FRAME_HEIGHT - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6,
+(255, 255, 255), 1, cv2.LINE_AA)
+
+ return frame
+
+# --- Main Execution Loop ---
+try:
+ sync.clear()
+
+ while cap.isOpened():
+ success, frame = cap.read()
+ if not success:
+ time.sleep(0.05)
+ continue
+
+ frame = cv2.flip(frame, 1) # Flip for mirror view
+ rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+ results = hands.process(rgb_frame)
+
+ color_name = "Neutral / Not Pointing"
+ active_color_rgb = COLOR_DEFAULT_RGB
+
+ if results.multi_hand_landmarks:
+ hand_landmarks = results.multi_hand_landmarks[0]
+
+ wrist_x = hand_landmarks.landmark[mp_hands.HandLandmark.WRIST].x
+ thumb_tip_x = hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_TIP].x
+
+            # --- Thumb Direction Logic ---
+            # Thumb tip has a smaller x than the wrist in the mirrored frame
+            # (the script treats this as the "RIGHT" gesture -> Green)
+            if thumb_tip_x < wrist_x:
+                active_color_rgb = COLOR_RIGHT_THUMB_RGB
+                color_name = "RIGHT (Green)"
+
+            # Thumb tip has a larger x than the wrist in the mirrored frame
+            # (the script treats this as the "LEFT" gesture -> Blue)
+            elif thumb_tip_x > wrist_x:
+                active_color_rgb = COLOR_LEFT_THUMB_RGB
+                color_name = "LEFT (Blue)"
+
+ # Draw the hand landmarks on the VNC frame
+ mp.solutions.drawing_utils.draw_landmarks(
+ frame,
+ hand_landmarks,
+ mp_hands.HAND_CONNECTIONS,
+ mp.solutions.drawing_styles.get_default_hand_landmarks_style(),
+ mp.solutions.drawing_styles.get_default_hand_connections_style())
+
+
+ # 1. Send Command to All Displays (Publish)
+ # Use *active_color_rgb to unpack the (R, G, B) tuple
+ sync.display_color(*active_color_rgb)
+
+ # 2. Draw Debug Info on VNC Desktop
+ display_frame = draw_debug_info(frame, color_name, active_color_rgb)
+ cv2.imshow('Gesture Publisher (VNC Debug)', display_frame)
+
+ # Break the loop if the 'q' key is pressed
+ if cv2.waitKey(5) & 0xFF == ord('q'):
+ break
+
+except Exception as e:
+ print(f"An error occurred: {e}")
+
+finally:
+ # Cleanup resources
+ sync.clear()
+ sync.stop()
+ cap.release()
+ cv2.destroyAllWindows()
\ No newline at end of file
diff --git a/Lab 6/gesture_test.py b/Lab 6/gesture_test.py
new file mode 100644
index 0000000000..1439f52b17
--- /dev/null
+++ b/Lab 6/gesture_test.py
@@ -0,0 +1,115 @@
+import cv2
+import time
+import HandTrackingModule as htm
+import math
+import sys
+
+# --- Configuration ---
+wCam, hCam = 640, 480
+# Center of the hand (based on typical hand detection frame)
+CENTER_X = wCam // 2
+CENTER_Y = hCam // 2
+# Thresholds for direction detection (adjust these based on testing)
+X_THRESHOLD = 50
+Y_THRESHOLD = 50
+# ---------------------
+
+def get_direction(lmList):
+ """
+ Analyzes the position of the index finger (landmark 8)
+ relative to the camera's center to determine pointing direction.
+ """
+ # Index finger tip is landmark 8
+ _, pointerX, pointerY = lmList[8]
+
+ # Calculate position relative to the center of the frame
+ diff_x = pointerX - CENTER_X
+ diff_y = pointerY - CENTER_Y
+
+ # --- Direction Logic ---
+ # Check for left/right (X-axis)
+ if diff_x > X_THRESHOLD:
+ horizontal = "RIGHT"
+ elif diff_x < -X_THRESHOLD:
+ horizontal = "LEFT"
+ else:
+ horizontal = ""
+
+ # Check for up/down (Y-axis - Note: Y-axis is inverted in OpenCV)
+ if diff_y < -Y_THRESHOLD:
+ vertical = "UP" # Lower Y-value is visually higher
+ elif diff_y > Y_THRESHOLD:
+ vertical = "DOWN"
+ else:
+ vertical = ""
+
+ # Combine directions (e.g., "UP-LEFT", or just "UP")
+ if horizontal and vertical:
+ return f"{vertical}-{horizontal}"
+ elif horizontal:
+ return horizontal
+ elif vertical:
+ return vertical
+ else:
+ return "CENTER/NEUTRAL"
+
+# --- Main Program ---
+def run_gesture_detector():
+ """Initializes camera and runs the detection loop."""
+ # Remove all cv2 window and audio control calls
+ # The original script had audio-related imports and functions. These are removed.
+
+ cap = cv2.VideoCapture(0)
+ cap.set(3, wCam)
+ cap.set(4, hCam)
+
+ # Exit if camera is not opened
+ if not cap.isOpened():
+ print("[ERROR] Could not open video stream or file.")
+ sys.exit(1)
+
+ detector = htm.handDetector(detectionCon=0.7)
+
+ pTime = 0
+ current_direction = "NO HAND DETECTED"
+
+ print("--- Gesture Direction Detector Started ---")
+ print("Press Ctrl+C to stop the script.")
+
+ try:
+ while True:
+ success, img = cap.read()
+ if not success:
+ print("[WARNING] Ignoring empty camera frame.")
+ time.sleep(0.1)
+ continue
+
+ img = detector.findHands(img, draw=False) # No drawing
+ lmList = detector.findPosition(img, draw=False)
+
+ new_direction = "NO HAND DETECTED"
+
+ if len(lmList) != 0:
+ new_direction = get_direction(lmList)
+
+ # Only print when the direction changes
+ if new_direction != current_direction:
+ current_direction = new_direction
+
+ # Calculate FPS for monitoring performance
+ cTime = time.time()
+ fps = 1 / (cTime - pTime) if pTime else 0
+ pTime = cTime
+
+ print(f"[{time.strftime('%H:%M:%S')}] FPS: {int(fps):<3} | Direction: {current_direction}")
+
+ time.sleep(0.05) # Small sleep to prevent 100% CPU usage
+
+ except KeyboardInterrupt:
+ print("\n--- Script stopped by user (Ctrl+C) ---")
+ finally:
+ cap.release()
+ # No cv2.destroyAllWindows() needed as no windows were opened
+
+if __name__ == '__main__':
+ run_gesture_detector()
\ No newline at end of file
diff --git a/Lab 6/mqtt_test.py b/Lab 6/mqtt_test.py
new file mode 100644
index 0000000000..80c72e753e
--- /dev/null
+++ b/Lab 6/mqtt_test.py
@@ -0,0 +1,4 @@
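+# Minimal display client: subscribe to the shared display topic and render
+# whatever color/text commands the broadcaster publishes (see sync_display.py).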
+from sync_display import SyncDisplay
+
+sync = SyncDisplay(mode='client')
+sync.start() # Keeps running, waiting for commands
diff --git a/Lab 6/rainbow_pub.py b/Lab 6/rainbow_pub.py
new file mode 100644
index 0000000000..c4a92fa012
--- /dev/null
+++ b/Lab 6/rainbow_pub.py
@@ -0,0 +1,172 @@
+# --- RAINBOW GESTURE PUBLISHER SCRIPT (Discrete ROYGBIV Cycle) ---
+# Detects a thumb-pointing gesture (Left or Right) as a single command
+# to cycle through the fixed ROYGBIV color sequence on all synchronized displays.
+
+import cv2
+import mediapipe as mp
+import time
+import numpy as np
+import sys
+import os
+import colorsys
+import threading
+
+# --- CRITICAL FIX: Ensure current directory is in path for SyncDisplay ---
+script_dir = os.path.dirname(os.path.abspath(__file__))
+if script_dir not in sys.path:
+ sys.path.append(script_dir)
+from sync_display import SyncDisplay
+
+
+# --- Configuration ---
+FRAME_WIDTH = 640
+FRAME_HEIGHT = 480
+COLOR_DEFAULT_RGB = (50, 50, 50) # Dark Gray (rest state)
+GESTURE_COOLDOWN_SEC = 0.5 # Time (in seconds) between acceptable gestures
+
+# --- ROYGBIV Color Sequence (RGB 0-255) ---
+ROYGBIV = [
+ (255, 0, 0), # R: Red
+ (255, 165, 0), # O: Orange
+ (255, 255, 0), # Y: Yellow
+ (0, 128, 0), # G: Green
+ (0, 0, 255), # B: Blue
+ (75, 0, 130), # I: Indigo
+ (238, 130, 238) # V: Violet
+]
+
+# --- State Variables for Cycling ---
+color_index = 0
+last_gesture_time = time.time()
+
+
+# --- Initialize MediaPipe Hand Detector ---
+mp_hands = mp.solutions.hands
+hands = mp_hands.Hands(
+ model_complexity=0,
+ min_detection_confidence=0.7,
+ min_tracking_confidence=0.5)
+
+# --- Initialize Webcam with Camera Index Fallback ---
+camera_index = 0
+cap = cv2.VideoCapture(camera_index)
+if not cap.isOpened():
+ camera_index = 1
+ print("Warning: Index 0 failed. Trying index 1...")
+ cap = cv2.VideoCapture(camera_index)
+
+if not cap.isOpened():
+ print("FATAL ERROR: Cannot open webcam at index 0 or 1. Check connection and permissions.")
+ sys.exit(1)
+
+cap.set(cv2.CAP_PROP_FRAME_WIDTH, FRAME_WIDTH)
+cap.set(cv2.CAP_PROP_FRAME_HEIGHT, FRAME_HEIGHT)
+print(f"Webcam initialized successfully at index {camera_index}.")
+
+
+# --- Initialize SyncDisplay in 'both' mode ---
+try:
+ sync = SyncDisplay(mode='both')
+except Exception as e:
+ print(f"FATAL ERROR: Could not initialize SyncDisplay. Check MQTT config/internet. Error: {e}")
+ cap.release()
+ sys.exit(1)
+
+
+def draw_debug_info(frame, color_name, active_color_rgb):
+ """Draws the instructional text and a colored overlay for the VNC desktop."""
+ # Convert RGB color back to BGR for OpenCV display
+ active_color_bgr = (active_color_rgb[2], active_color_rgb[1], active_color_rgb[0])
+
+ # Create a subtle color overlay for debug view
+ overlay = np.full((FRAME_HEIGHT, FRAME_WIDTH, 3), active_color_bgr, dtype=np.uint8)
+ frame = cv2.addWeighted(frame, 0.7, overlay, 0.3, 0) # 70% frame, 30% overlay
+
+ # Add text on top
+ text = f"COLOR: {color_name}"
+ cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 255, 255), 2, cv2.LINE_AA)
+ cv2.putText(frame, "Point Thumb Left/Right to cycle ROYGBIV. Press 'q' to quit.",
+ (10, FRAME_HEIGHT - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 1, cv2.LINE_AA)
+
+ return frame
+
+# --- Main Execution Loop ---
+try:
+ sync.clear()
+
+ while cap.isOpened():
+ current_time = time.time()
+ success, frame = cap.read()
+ if not success:
+ time.sleep(0.05)
+ continue
+
+ frame = cv2.flip(frame, 1) # Flip for mirror view
+ rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+ results = hands.process(rgb_frame)
+
+ color_name = "Neutral"
+ active_color_rgb = ROYGBIV[color_index] # Always start with the current color
+
+ # --- Gesture Detection and Cycling Logic ---
+ gesture_detected = False
+
+ if results.multi_hand_landmarks:
+ hand_landmarks = results.multi_hand_landmarks[0]
+ wrist_x = hand_landmarks.landmark[mp_hands.HandLandmark.WRIST].x
+ thumb_tip_x = hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_TIP].x
+
+            # Treat any detected hand whose thumb is offset from the wrist as a
+            # cycle command (in practice this is true for almost any visible hand)
+ if thumb_tip_x != wrist_x:
+ gesture_detected = True
+ color_name = "Pointing Detected"
+
+ # Check if enough time has passed since the last successful gesture
+ if (current_time - last_gesture_time) > GESTURE_COOLDOWN_SEC:
+
+                    # Advance the color index (cycle through 0, 1, 2, ..., len-1).
+                    # No 'global' statement is needed: this loop runs at module level.
+                    color_index = (color_index + 1) % len(ROYGBIV)
+
+ active_color_rgb = ROYGBIV[color_index]
+ color_name = f"NEXT ({['R', 'O', 'Y', 'G', 'B', 'I', 'V'][color_index]})"
+ last_gesture_time = current_time # Reset cooldown timer
+
+ # Draw the hand landmarks on the VNC frame
+ mp.solutions.drawing_utils.draw_landmarks(
+ frame,
+ hand_landmarks,
+ mp_hands.HAND_CONNECTIONS,
+ mp.solutions.drawing_styles.get_default_hand_landmarks_style(),
+ mp.solutions.drawing_styles.get_default_hand_connections_style())
+
+
+ # If no gesture was detected and the hand is invisible, use the default gray (rest state)
+ if not gesture_detected and not results.multi_hand_landmarks:
+ active_color_rgb = COLOR_DEFAULT_RGB
+ color_name = "Resting"
+
+ # 1. Send Command to All Displays (Publish)
+ sync.display_color(*active_color_rgb)
+
+ # 2. Draw Debug Info on VNC Desktop
+ display_frame = draw_debug_info(frame, color_name, active_color_rgb)
+ cv2.imshow('Gesture Publisher (VNC Debug)', display_frame)
+
+ # Break the loop if the 'q' key is pressed
+ if cv2.waitKey(5) & 0xFF == ord('q'):
+ break
+
+except Exception as e:
+ print(f"\nAn error occurred: {e}")
+
+finally:
+ # Cleanup resources
+ if 'sync' in locals() and sync is not None:
+ sync.clear()
+ sync.stop()
+ print("SyncDisplay stopped.")
+
+ cap.release()
+ cv2.destroyAllWindows()
+ print("Webcam released. Program finished.")
\ No newline at end of file
diff --git a/Lab 6/sync_display.py b/Lab 6/sync_display.py
new file mode 100644
index 0000000000..1390731191
--- /dev/null
+++ b/Lab 6/sync_display.py
@@ -0,0 +1,283 @@
+#!/usr/bin/env python3
+"""
+Synchronized Display System for Multiple Raspberry Pis
+
+This module provides a simple way to synchronize displays across multiple Pis.
+One Pi broadcasts display commands, all Pis (including broadcaster) show the same thing.
+
+Usage:
+ # On the broadcaster Pi:
+ from sync_display import SyncDisplay
+ sync = SyncDisplay(mode='broadcast')
+ sync.display_color(255, 0, 0) # All Pis show red
+ sync.display_text("Hello!", color=(255, 255, 255)) # All Pis show text
+
+ # On client Pis (or run both modes on broadcaster):
+ from sync_display import SyncDisplay
+ sync = SyncDisplay(mode='client')
+ sync.start() # Will automatically display what broadcaster sends
+"""
+
+import board
+import digitalio
+from PIL import Image, ImageDraw, ImageFont
+import adafruit_rgb_display.st7789 as st7789
+import paho.mqtt.client as mqtt
+import json
+import time
+import uuid
+import threading
+
+# MQTT Configuration
+MQTT_BROKER = 'farlab.infosci.cornell.edu'
+MQTT_PORT = 1883
+MQTT_TOPIC = 'IDD/syncDisplay/commands'
+MQTT_USERNAME = 'idd'
+MQTT_PASSWORD = 'device@theFarm'
+
+
+class SyncDisplay:
+ """Synchronized display controller for multiple Pis"""
+
+ def __init__(self, mode='client', display_rotation=90):
+ """
+ Initialize synchronized display
+
+ Args:
+ mode: 'client', 'broadcast', or 'both' (client receives, broadcast sends, both does both)
+ display_rotation: Display rotation (0, 90, 180, 270)
+ """
+ self.mode = mode
+ self.display_rotation = display_rotation
+
+ # Setup display
+ self.disp, self.width, self.height = self._setup_display()
+ self.image = Image.new("RGB", (self.width, self.height))
+ self.draw = ImageDraw.Draw(self.image)
+
+ # Setup MQTT
+ self.client = mqtt.Client(str(uuid.uuid1()))
+ self.client.username_pw_set(MQTT_USERNAME, MQTT_PASSWORD)
+ self.client.on_connect = self._on_connect
+ self.client.on_message = self._on_message
+
+ # Connect to MQTT
+ try:
+ self.client.connect(MQTT_BROKER, port=MQTT_PORT, keepalive=60)
+ if self.mode in ['client', 'both']:
+ self.client.loop_start()
+ print(f"[OK] SyncDisplay initialized (mode={mode})")
+ except Exception as e:
+ print(f"[ERROR] MQTT connection failed: {e}")
+
+ def _setup_display(self):
+ """Setup MiniPiTFT display"""
+ # Pin configuration
+ cs_pin = digitalio.DigitalInOut(board.D5)
+ dc_pin = digitalio.DigitalInOut(board.D25)
+
+ # Backlight
+ backlight = digitalio.DigitalInOut(board.D22)
+ backlight.switch_to_output()
+ backlight.value = True
+
+ # Create display
+ spi = board.SPI()
+ disp = st7789.ST7789(
+ spi,
+ cs=cs_pin,
+ dc=dc_pin,
+ rst=None,
+ baudrate=64000000,
+ width=135,
+ height=240,
+ x_offset=53,
+ y_offset=40,
+ rotation=self.display_rotation
+ )
+
+ # Get dimensions after rotation
+ if self.display_rotation in [90, 270]:
+ width, height = 240, 135
+ else:
+ width, height = 135, 240
+
+ print(f"[OK] Display initialized ({width}x{height})")
+ return disp, width, height
+
+ def _on_connect(self, client, userdata, flags, rc):
+ """MQTT connected callback"""
+ if rc == 0:
+ if self.mode in ['client', 'both']:
+ client.subscribe(MQTT_TOPIC)
+ print(f"[OK] Subscribed to {MQTT_TOPIC}")
+ else:
+ print(f"[ERROR] MQTT connection failed: {rc}")
+
+ def _on_message(self, client, userdata, msg):
+ """MQTT message received - execute display command"""
+ try:
+ cmd = json.loads(msg.payload.decode('utf-8'))
+ cmd_type = cmd.get('type')
+
+ if cmd_type == 'color':
+ self._render_color(cmd['r'], cmd['g'], cmd['b'])
+
+ elif cmd_type == 'text':
+ self._render_text(
+ cmd['text'],
+ color=tuple(cmd.get('color', [255, 255, 255])),
+ bg_color=tuple(cmd.get('bg_color', [0, 0, 0])),
+ font_size=cmd.get('font_size', 20)
+ )
+
+ elif cmd_type == 'image':
+ # For future: support image sync
+ pass
+
+ elif cmd_type == 'clear':
+ self.clear()
+
+ except Exception as e:
+ print(f"[ERROR] Failed to process command: {e}")
+
+ def _render_color(self, r, g, b):
+ """Render solid color on display"""
+ self.draw.rectangle((0, 0, self.width, self.height), fill=(r, g, b))
+ self.disp.image(self.image)
+
+ def _render_text(self, text, color=(255, 255, 255), bg_color=(0, 0, 0), font_size=20):
+ """Render text on display"""
+ # Clear with background color
+ self.draw.rectangle((0, 0, self.width, self.height), fill=bg_color)
+
+ # Load font
+ try:
+ font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", font_size)
+ except:
+ font = ImageFont.load_default()
+
+ # Draw text (centered)
+ bbox = self.draw.textbbox((0, 0), text, font=font)
+ text_width = bbox[2] - bbox[0]
+ text_height = bbox[3] - bbox[1]
+ x = (self.width - text_width) // 2
+ y = (self.height - text_height) // 2
+
+ self.draw.text((x, y), text, font=font, fill=color)
+ self.disp.image(self.image)
+
+ # Public API - these methods broadcast to all displays
+
+ def display_color(self, r, g, b):
+ """Display solid color on all synchronized displays"""
+ cmd = {
+ 'type': 'color',
+ 'r': r,
+ 'g': g,
+ 'b': b,
+ 'timestamp': time.time()
+ }
+
+ # If in both mode, render locally too
+ if self.mode in ['both', 'client']:
+ self._render_color(r, g, b)
+
+ # If in broadcast mode, send to others
+ if self.mode in ['both', 'broadcast']:
+ self.client.publish(MQTT_TOPIC, json.dumps(cmd))
+
+ def display_text(self, text, color=(255, 255, 255), bg_color=(0, 0, 0), font_size=20):
+ """Display text on all synchronized displays"""
+ cmd = {
+ 'type': 'text',
+ 'text': text,
+ 'color': list(color),
+ 'bg_color': list(bg_color),
+ 'font_size': font_size,
+ 'timestamp': time.time()
+ }
+
+ # If in both mode, render locally too
+ if self.mode in ['both', 'client']:
+ self._render_text(text, color, bg_color, font_size)
+
+ # If in broadcast mode, send to others
+ if self.mode in ['both', 'broadcast']:
+ self.client.publish(MQTT_TOPIC, json.dumps(cmd))
+
+ def clear(self, color=(0, 0, 0)):
+ """Clear all displays"""
+ self.display_color(*color)
+
+ def start(self):
+ """Start client mode (blocking - keeps listening for commands)"""
+ if self.mode not in ['client', 'both']:
+ print("[WARNING] start() only works in client or both mode")
+ return
+
+ print("Listening for display commands... (Press Ctrl+C to exit)")
+ try:
+ while True:
+ time.sleep(0.1)
+ except KeyboardInterrupt:
+ print("\nShutting down...")
+ self.client.loop_stop()
+ self.client.disconnect()
+
+ def stop(self):
+ """Stop MQTT client"""
+ self.client.loop_stop()
+ self.client.disconnect()
+
+
+# Simple test/demo
+if __name__ == '__main__':
+ import sys
+
+ if len(sys.argv) < 2:
+ print("Usage:")
+ print(" python sync_display.py client # Listen for commands")
+ print(" python sync_display.py broadcast # Send test commands")
+ print(" python sync_display.py both # Do both")
+ sys.exit(1)
+
+ mode = sys.argv[1]
+ sync = SyncDisplay(mode=mode)
+
+ if mode == 'broadcast':
+ print("\nSending test commands...")
+
+ print("Red...")
+ sync.display_color(255, 0, 0)
+ time.sleep(2)
+
+ print("Green...")
+ sync.display_color(0, 255, 0)
+ time.sleep(2)
+
+ print("Blue...")
+ sync.display_color(0, 0, 255)
+ time.sleep(2)
+
+ print("Hello World...")
+ sync.display_text("Hello World!", color=(255, 255, 0), bg_color=(128, 0, 128))
+ time.sleep(2)
+
+ print("Done!")
+ sync.stop()
+
+ elif mode in ['client', 'both']:
+ # If both mode, send some test commands in background
+ if mode == 'both':
+ def send_test_commands():
+ time.sleep(3)
+ colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
+ for i, (r, g, b) in enumerate(colors):
+ sync.display_color(r, g, b)
+ time.sleep(2)
+ sync.display_text("Synchronized!")
+
+ threading.Thread(target=send_test_commands, daemon=True).start()
+
+ sync.start()
\ No newline at end of file
diff --git a/README.md b/README.md
index 086eafada8..2dda8235c9 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# [Your name here]'s-Lab-Hub
+# Viha's-Lab-Hub
for [Interactive Device Design](https://github.com/FAR-Lab/Developing-and-Designing-Interactive-Devices/)
Please place links here to the README.md's for each of your labs here:
@@ -15,7 +15,7 @@ Please place links here to the README.md's for each of your labs here:
[Lab 6. Little Interactions Everywhere](Lab%206/)
-[Final Project](https://github.com/IRL-CT/Developing-and-Designing-Interactive-Devices/blob/2025Fall/FinalProject.md)
+[Final Project](https://github.com/NikhilGangaram/IDD-ASL-Alexa)
Online Repository