Family Promises, Fulfilled
Hello, world! To kick off this blog, I’ll be talking about the first project I’ve worked on with real-world application: the Family Promise Tracker, a service for Family Promise of Spokane, Washington, a charitable organization working to house the homeless and to keep at-risk families from losing their homes. Family Promise operates in cities across the US, but this project’s scope is limited to Spokane.

As an organization working to end homelessness, Family Promise needs accurate metrics to track which assistance serves its communities best. Different demographics have different needs, and some programs serve certain groups better than others: single mothers may need childcare, while veterans may need long-term medical aid, and our product aims to ensure that they’re getting connected to the right services for them!

Our Tracker maintains a database of all individuals being served by Family Promise, categorizing them by age, familial status, veteran status, sex, and other demographic criteria, then linking them to the services and programs serving them. The Tracker also records how often people visit or revisit, whether they need follow-up assistance, and success outcomes. It then displays those data points on charts, making it easy to see the relationships between services and demographics. With this solution in hand, Family Promise will be better able to keep metrics, map out strategies to shore up struggling programs, and get a clearer view of the services that are serving people well.
One of my primary concerns entering this project was simple: Where are we starting? What has been laid down before us, and what still needs to be done? There was no shortage of tasks to do, though much of the architecture and groundwork was already in place. Our team’s focus was the backend: having it pull information from the database and pass it to the frontend website, while also taking data from the frontend and saving it to the database. With that priority in mind, we successfully implemented the server, laying the groundwork for the frontend to build its graphs and for data science to create predictive models based on the correlations.
Endpoints in sight!
Due to the high volume of data we worked with, we needed a way to handle it all and route it to the correct tables. Our task was clear: create endpoints, seed data, and routers. Before the frontend and data science teams could build any of their features, we first had to set up these components in the server.
Endpoints are straightforward pieces of code: they receive requests from the frontend and tell the server to fetch or modify data in its tables. The real challenge was the size of the task, with dozens of different types of data needing tables of their own. On top of this, many records relate to others, and so need matching entries added across several tables at once. We used Knex transactions to overcome this, chaining database operations together so that they either all succeed or all roll back. This informed our strategy: many endpoints needed building! As such, every contribution to solving this problem followed a format that looked much like this:
While the exact code for each endpoint differs, they all follow a very similar pattern, which let us work efficiently.
A Promising Future
Leaving this project, I pass the torch to the next team to add their strength to the task. My team and I leave them endpoints built, a framework for passing information to and from the data science algorithms, seed data and tables ready for use, a deployed website and server, full CRUD operations against the server, and a scaffolded website, laying the much-needed groundwork for future features!
We had initially planned to create an interactive map and dynamic graphs to showcase where programs and services were, and to easily see who was going where, but achieving those goals would have taken more than the allotted month. This gives clear goals for the future of this product! With the prep work finished, these ideas are the natural next steps: mapping where the programs and services are, and graphing who is going where, in what volumes, and which services best suit different demographics. On the data science end, a future feature is a learning algorithm that matches people with the programs and services that will help them most. The challenges I foresee are ones of backend management and maintenance: any changes to the backend functions need to be carefully proctored and reviewed, or sensitive information could be placed in the wrong tables, or an endpoint could send back mountains of data instead of the requested information.
In the course of this project, I found myself growing as a developer. My teammates and team leaders gave me critical feedback to improve myself, to code better, to write better, to interview better, to BE better. “You can do better” is a double-edged sentiment. It can harm, but it can press one forward just as well. I have gained a respect for the developmental process and working within a team, as well as a newfound understanding and appreciation of code review.
Working on this product has helped my career immensely, granting me the experience of what it actually takes to be a web developer and showing how the sausage is made, so to speak. I am confident that I can work in the field now, armed with the skills that Lambda taught me!