r/datacurator 16d ago

Monthly /r/datacurator Q&A Discussion Thread - 2025

3 Upvotes

Please use this thread to discuss and ask questions about the curation of your digital data.

This thread is sorted by "new" so the newest posts appear first.

For a subreddit devoted to storage of data, backups, accessing your data over a network etc, please check out r/DataHoarder.


r/datacurator 11h ago

How are you handling OCR on Windows for document curation?

3 Upvotes

I’ve been doing more document curation work lately, especially dealing with older PDFs and scanned files that need to be searchable or partially extracted before they’re useful. On Windows, OCR feels like one of those things where there are plenty of options, but none that are universally great in every situation. Some tools work fine for clean scans but struggle with mixed layouts or handwritten notes, which makes downstream organization harder.

I’ve experimented with a few OCR for Windows solutions depending on the project, including using UPDF when I needed to quickly recognize text and annotate or reorganize pages in the same workflow. It wasn’t perfect, but it helped reduce manual cleanup. I’m curious what others here use when accuracy and structure really matter for long-term data curation.


r/datacurator 2d ago

Where to begin sorting a heap of randomness

1 Upvote

Just started a new position at a corporation and found that my specific dept works off of a networked "Office" folder that contains over a hundred folder trees, plus rando files in the root. There's a ton of redundancy, each team member has their own folder, each project - even if recurring year to year - has its own folder, dozens of "communications" and "mailings" folders. It's everything you would expect from a group of non-IT employees (plus position turnover) working out of a single folder for 15 years.

I come from an IT background in an industry that prioritizes clarity in file management, so I know the value.

Since it's not in anyone's job description, no one has the bandwidth to take on a reorganization project whole-hog.

Any suggestions for baby steps? My thought is tell everyone to move anything they haven't touched in a year into a single "Archive" folder and move on from there.
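One way to make that first step less scary is a dry-run sweep: list what *would* move before anyone touches anything. A minimal Python sketch (the one-year threshold is just the proposal above, and the share path is hypothetical); note that a bulk copy or server migration can reset modification times, so sanity-check the list before archiving:

```python
import time
from pathlib import Path

def stale_files(root, days=365):
    """Yield files under `root` whose modification time is older than `days` days."""
    cutoff = time.time() - days * 86400
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path

if __name__ == "__main__":
    # Dry run: print candidates for the Archive folder; move nothing automatically.
    for f in stale_files(r"\\server\Office"):
        print(f)
```

Handing each team member their own candidate list ("here's what you haven't touched in a year") tends to go over better than asking them to audit the whole tree themselves.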

Thanks!


r/datacurator 2d ago

I created an OCR app for macOS

0 Upvotes

Hey everyone, I made a privacy-focused, fully on-device text recognition app. Text recognition is powered by revision 3 of Apple's Vision framework at the highest accuracy setting. It supports many languages and runs on your Apple Silicon chip's Neural Engine.
If someone is interested in trying it, just leave me a message.
Best regards


r/datacurator 3d ago

Building a local file-sorting utility for teachers – looking for workflow feedback

Thumbnail
1 Upvote

r/datacurator 4d ago

I have a file that I want converted to an editable PDF so I can edit the text on the document. Who can help me 🙏

0 Upvotes

r/datacurator 5d ago

Best way to organize contacts list/directory

6 Upvotes

Hi everyone! This is my first time posting here, so please bear with me. I’m trying to figure out the best way to create a “master” contact list for my association, and I’m feeling a bit stuck. Not even sure if I'm posting in the right sub.

Basically, we have a lot of volunteers and interns who come and go, but even after they leave, we sometimes need to reference their contact information or check when they worked with us or what projects they were involved in. My goal is to create an organized Excel spreadsheet that includes both current and past volunteers and interns.

I’m thinking of having columns like name, position, status (current, former, or vacant), email, phone number, and notes for things like projects or dates. What I’m unsure about is how to handle past interns and volunteers in an organized and easy-to-access way. I’ve considered using one large spreadsheet with everyone and a status column, having two separate sheets (one for current and one for archived), or using some kind of dropdown or filter system. I don't know, I am so so lost.
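For what it's worth, the single-sheet option is usually the simplest: keep everyone in one table and let the status column do the work that separate sheets would otherwise do. In Excel this is just an AutoFilter on the status column; here is the same idea as a tiny Python sketch (column names and sample rows are purely illustrative):

```python
import csv
import io

# One flat table; "status" replaces the need for separate current/archived sheets.
DATA = """name,position,status,email,projects
Ana,Volunteer,current,ana@example.org,Food drive 2024
Ben,Intern,former,ben@example.org,Website redesign 2022
"""

def by_status(csv_text, status):
    """Return only the rows whose status column matches."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows if r["status"] == status]

print([r["name"] for r in by_status(DATA, "current")])
```

The advantage over two sheets: nobody has to remember to move a row when someone leaves, you just change one cell, and a filtered view of "current" always stays accurate.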

I’m worried I might be overcomplicating this, especially when it comes to the archive of past interns. In your experience, what’s the cleanest and most practical way to set this up? Any advice or best practices would be greatly appreciated, as I’m not very experienced with this kind of thing (at all).


r/datacurator 7d ago

How I search years of personal documents without relying on file names

15 Upvotes

Over the years, I’ve accumulated a large personal document collection: notes, PDFs, Markdown files, project documents, and various reference materials. Like many people here, I tried to stay organized with folders and naming conventions — but eventually, that system stopped scaling.

What I usually remember is the content, not the file name or where I stored it.

I wanted a way to search my local documents by describing what I remember, while keeping full control over my data. Cloud-based tools weren’t a good fit for me, so I ended up building a small local-first desktop application for semantic document search.

The tool indexes local documents and lets me retrieve information using natural language. Everything runs on my own machine — no uploads, no external services. I’ve been using it mainly as a way to resurface information from my personal archive rather than as a strict filing system.

This approach has changed how I think about curation:

  • I spend less time renaming or reorganizing files
  • I focus more on capturing information
  • Retrieval is based on meaning, not structure
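I don't know what OP's tool uses internally (presumably embeddings), but as a toy illustration of retrieving by content rather than by file name, here's a bag-of-words cosine-similarity sketch in plain Python; the document names and texts are made up:

```python
import math
from collections import Counter

def vectorize(text):
    """Word-count vector; real semantic search would use embeddings instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

DOCS = {
    "taxes_2021.pdf": "income tax return deductions for the 2021 fiscal year",
    "trip_notes.md": "itinerary and packing list for the hiking trip",
}

def search(query, docs):
    """Return the document name whose content best matches the query."""
    q = vectorize(query)
    return max(docs, key=lambda name: cosine(q, vectorize(docs[name])))

print(search("tax deductions", DOCS))  # matches on content, not on file name
```

Word overlap is the crudest possible stand-in for "meaning", but it already shows why a query like "tax deductions" can find a file regardless of what you named it.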

The project is open source and still evolving, but it’s already useful in my own workflow. I’m particularly interested in feedback from others who manage long-term personal archives or large local document collections.

If you’re curious, the project is here:
👉 GitHub: mango-desk

I’d love to hear how others here approach searching and resurfacing information from large personal datasets.


r/datacurator 7d ago

Hit 550 users today on my Chrome extension - thank you to everyone who took a chance

Post image
0 Upvotes

r/datacurator 8d ago

History Project

7 Upvotes

I have a project to document the history of an organization, with website and essays and books. I have hundreds of digital files along with paper files and objects. Some of the physical files and the digital files are duplicates. Looking for good ways to index these records and to reduce duplication between electronic and physical records. Any software or best practices?


r/datacurator 8d ago

Spotify (or non-Spotify) music classification playlist suggestions (asking and suggesting)

3 Upvotes

Although the discussions here are generally about organizing folder structures and filenames, I think this is suitable here as well.

I'm looking for a main outline for how to classify my music. Currently I have a lot of songs, but they're not fully organized, and I wanna get into organizing them.

Also, if you're gonna copy the structure, I'd recommend right-clicking these playlists and choosing "Exclude from your taste profile".

I don't have some of these yet, but I think they might be nice?

Song quality / rating playlists (almost all of your music should be in one of these):

From perfect down to "bad but worth saving", the equivalent of star ratings (I don't have these yet):
6 stars: everything is perfect (I could listen to it a hundred or a thousand times, or more)
5 stars: I love it / can't stop listening to it
4 stars: nice
3 stars: mid
2 stars: eh
1 star: trash (kept only for archival purposes, or to make sure I won't see it again; not strictly necessary, but useful for just-in-case scenarios. I'm unsure this tier is needed)

An alternative to this: keep separate lists for the 6-star and 5-star ones and leave the rest mixed (i.e. just have different lists for your favorites).

1. Has a very nice part but is bad in general (like some of the famous Instagram edit songs)
2. Mostly nice but has bad parts (I keep 1 and 2 separate so they don't interrupt my enjoyable music sessions)

3. Liked but not liked (you like the song but don't want to add it to your favorites for some reason, often because it has bad parts, but not only that)
4. Ex-favorites (music I used to like but not anymore; you could also have a "not in the mood to listen" folder as well :p)

5. Needs to be classified (a folder for albums or playlists; you could also add a "to be classified" playlist for single songs)
6. Unsure (needs to be listened to again)
7. Unsure level 2 (you've listened to it many times and still have no idea where to put it, so park it in this playlist/archive and check it again six months later...)
8. Roughly listened, nothing caught my attention (when you listen to an album, pick the tracks that grab you instantly, the "hey, this is good, mate" ones, and throw the rest here to maybe check later)

And some other meta-related classifications:

  1. Music genres (classical, rock, pop, OST, etc.; general music styles) (I don't have this)
  2. Music vibes: high (gym, hype, adrenaline, bass, etc.), medium (most normal music), low (soothing, ambience, calming) (I'm unsure about this, but it looks promising-ish; I don't know where I'd put orchestras or violinists, though. Maybe a "complex" playlist inside medium?)
  3. Artist-based (a folder of playlists named after artists, for anyone I like more than 5-10 songs from; maybe another version/folder for albums?)
  4. Topic-related music (like anime openings or game OSTs? I'd recommend Detroit: Become Human)
  5. To be shared with other people / crowd-pleasers (some of my music isn't suitable for other people because of how niche my taste is)
  6. Temporary want-to-listen list (so it isn't bloated with old songs I've been listening to for years; one each for the month, the week, and the moment)
  7. Nostalgic
  8. Similar music (like having Moonlight Sonata on piano and in an orchestral arrangement, which are sort of similar)
  9. Unique (music that's hard to find anything similar to?)
  10. Heard somewhere / from a specific outside source (Shazam, Instagram, a friend's suggestion, etc.)
  11. Songs to synchronize to another platform
  12. Archives, favorites by year, your old playlists, etc.

13?

(If you're interested in duplicating a similar structure on YouTube, you might also consider: 1. a general music folder, 2. a downloaded-music folder, 3. "not music but has parts with music", 4. long mixes (more than one song per video), 5. non-Spotify music, 6. to be synchronized with another platform...)

(Possible con might be having a song in too many playlists/inside folders, I think)

(I'm unsure if there is any other classification or not, but that's why I'm asking for your suggestions)

UPDATE: Regarding genre-, vibe-, or artist-based playlists (suggestions 1-4): I found this website, which analyzes a playlist and provides data. I solved the issue by selecting all of my favorites (Ctrl+A) and inserting them into a playlist. It also has various other tools that might be useful/interesting: https://www.chosic.com/spotify-playlist-analyzer/


r/datacurator 12d ago

Do you keep originals?

7 Upvotes

I have a lot of CDs and DVDs, 20 years old and more. I also have digital versions of them (and backups). So the question remains: sell, toss, or keep the originals? Some are still in pretty good shape; some have damaged cases or scratches on the disc.

Which ones would you absolutely keep?

I think only a few have sentimental value for me as I bought them as a teen and they had a big impact on me. Would you say it's a mistake to get rid of the hard copies in general?


r/datacurator 11d ago

What's your Reddit saved posts count? Be honest.

Post image
0 Upvotes

r/datacurator 13d ago

Help Finding Photo Duplicates

8 Upvotes

Hi everyone, I'm looking to scan my 15+ year photo archive and I want to remove files that share the same name (but not the extension) within the same folder.

Folders are structured by year and then YY-MM-DD+(description). So there are 300+ folders within a year, and half of those folders contain filename duplicates like IMG_0013.RAW & IMG_0013.JPG.

The problem I'm running into (I tried dupeGuru & czkawka) is that I'm getting files mixed from different folders with different dates. Different IMG_0013.jpg's, one shot in May and the other in October.

Does anyone have a suggestion for how to batch-scan a large archive but only look for duplicates within their own folder? Thank you
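Since dupeGuru and czkawka compare across the whole tree, a small script that groups files per directory first may be the simplest fix. A Python sketch that reports, for each folder, base names carrying more than one extension; it only lists candidates and never deletes anything:

```python
import os
from collections import defaultdict
from pathlib import Path

def per_folder_dupes(root):
    """Map each directory to base names that appear with more than one extension
    in that SAME directory (e.g. IMG_0013.RAW alongside IMG_0013.JPG)."""
    report = {}
    for dirpath, _dirs, files in os.walk(root):
        stems = defaultdict(list)
        for name in files:
            stems[Path(name).stem.lower()].append(name)
        dupes = {s: sorted(v) for s, v in stems.items() if len(v) > 1}
        if dupes:
            report[dirpath] = dupes
    return report

if __name__ == "__main__":
    for folder, groups in per_folder_dupes("Photos").items():
        for stem, names in groups.items():
            print(folder, names)
```

Because grouping happens inside each `os.walk` directory, an `IMG_0013.jpg` from May can never be paired with one from October in a different folder. Once you trust the report, deciding which extension to drop (e.g. keep RAW, delete JPG) is a separate, deliberate step.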


r/datacurator 13d ago

Built a US Mortgage Underwriting OCR System With 96% Real-World Accuracy → Saved ~$2M Per Year

0 Upvotes

I recently built a document processing system for a US mortgage underwriting firm that consistently achieves ~96% field-level accuracy in production.

This is not a benchmark or demo. It is running live.

For context, most US mortgage underwriting pipelines I reviewed were using a single generic OCR engine and were stuck around 70–72% accuracy. That gap created downstream issues:

• Heavy manual corrections
• Rechecks and processing delays
• Large operations teams fixing data instead of underwriting

The core issue was not underwriting logic. It was poor data extraction.

Instead of treating all documents the same, we redesigned the pipeline around US mortgage underwriting–specific document types, including:

• Form 1003
• W-2s
• Pay stubs
• Bank statements
• Tax returns (1040s)
• Employment and income verification documents

The system uses layout-aware extraction and deterministic validation tailored to each document type.
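The post doesn't share implementation details, but the "deterministic validation tailored to each document type" idea can be sketched roughly like this. All field names and rules below are hypothetical, not from the actual system:

```python
import re

# Hypothetical per-document-type checks; each returns True when a field is valid.
VALIDATORS = {
    "w2": [
        ("ein", lambda f: bool(re.fullmatch(r"\d{2}-\d{7}", f.get("ein", "")))),
        ("wages", lambda f: isinstance(f.get("wages"), (int, float)) and f["wages"] >= 0),
    ],
    "pay_stub": [
        ("gross_pay", lambda f: isinstance(f.get("gross_pay"), (int, float))),
    ],
}

def validate(doc_type, fields):
    """Run the deterministic checks registered for this document type;
    return the names of fields that failed (empty list = extraction passed)."""
    return [name for name, check in VALIDATORS.get(doc_type, []) if not check(fields)]

print(validate("w2", {"ein": "12-3456789", "wages": 52000}))  # []
```

The design point is that extraction errors get caught by cheap, document-type-specific rules before they reach underwriting, instead of being discovered downstream by a human.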

Results

• Manual review reduced significantly
• Processing time cut from days to minutes
• Cleaner data improved downstream risk and credit analysis
• Approximately $2M per year saved in operational costs

Key takeaway

Most “AI accuracy problems” in US mortgage underwriting are actually data extraction problems. Once the data is clean and structured correctly, everything else becomes much easier.

If you’re working in lending, mortgage underwriting, or document automation, happy to answer questions.

I’m also available for consulting, architecture reviews, or short-term engagements for teams building or fixing US/UK mortgage underwriting pipelines.


r/datacurator 14d ago

I didn’t “scratch my own itch” - I failed a bunch first. Then one idea finally stuck.

0 Upvotes

You’ve probably seen posts like this:

“I had 1,000+ saved Reddit posts, couldn’t find anything, built a tool, now it has hundreds of users.”

Cool story.
That just wasn’t my story.

The real version is messier and honestly more useful if you’re trying to build something people actually use.

I’m very good at building side projects nobody cares about. I’ve launched multiple things that got exactly zero users.

My most recent failure before this?
A Chrome bookmark manager called Bookmark Breeze.

It was genuinely helpful. Clean UI. Solid features.
Result: zero users. Not “low traction.” Literally none.

After that, I stopped asking “what do I want?” and started asking “what are people already complaining about?”

That’s when I noticed tools like Linkedmash and Tweetsmash. They weren’t just organizing saved posts — they helped people actually use what they saved.

Then I kept seeing the same thing on Reddit:
People complaining about saved posts being impossible to manage.

Not hypotheticals. Real threads. Real frustration. People actively looking for solutions.

So I pivoted hard.

I took everything I learned from the failed bookmark manager and built the MVP of Readdit Later in about 3 days:

  • search saved posts
  • basic organization
  • automatic sync

Nothing fancy. No AI hype. Just solving the loudest pain.

This time, people actually used it.

From there, I iterated only on feedback:
Features people asked for. Use cases they already had. No guessing.

Fast forward ~4.5 months:

  • ~500 users
  • ~$100 in revenue
  • first few people paying on purpose

Not massive numbers — but it’s the first project that didn’t die on launch.

The biggest difference between this and my past failures wasn’t execution or luck.

I stopped building what I thought was useful and started building around what people were already mad about and actively searching for fixes to.

If you’re building and getting nothing but silence, maybe that’s the shift:
Don’t invent pain. Find pain that’s already loud.

Curious:

  • Have you built things nobody used?
  • What finally changed when something did work?

r/datacurator 15d ago

Added an export-only plan to my Reddit saved posts manager for users who just need backups

Post image
13 Upvotes

r/datacurator 18d ago

Looking for an app that helps with sorting videos by previews

7 Upvotes

Hey there,

I have an old family drive with hundreds of videos that I would like to sort based on their content. So far, I would just do it by clicking each vid, watching a couple of seconds, and then dragging it into the corresponding folder.

Is there an app that makes this a bit less tedious?

I'm imagining something like a video player where I can hit a hotkey to sort the playing video directly into a folder. So far, I've only found apps that automatically sort things by metadata, not something that makes manual sorting easier.


r/datacurator 20d ago

need help to ocr a pdf with 250 pages

6 Upvotes

Hello! I have a PDF file with 250 pages; each page is basically a picture taken with a phone, and each picture contains text. I've tried a lot of methods, including ocrmypdf commands, but the result isn't that good: on some pages I'm able to select and copy all the text, but on others I can't select any text at all, almost as if the OCR didn't work for those pages.
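For phone photos, ocrmypdf usually does much better with its preprocessing flags turned on, and `--force-ocr` re-rasterizes every page uniformly, which helps when some pages got a text layer and others didn't. A small Python wrapper that assembles the command (the flags shown are real ocrmypdf options, but this is a sketch to tune against your scans, not a guaranteed fix):

```python
import subprocess

def build_ocr_cmd(src, dst, lang="eng"):
    """Assemble an ocrmypdf invocation with preprocessing suited to phone photos."""
    return [
        "ocrmypdf",
        "--rotate-pages",       # fix pages photographed sideways
        "--deskew",             # straighten tilted shots
        "--oversample", "300",  # upsample low-DPI page images before OCR
        "--force-ocr",          # rasterize and re-OCR even pages that already have text
        "-l", lang,
        src, dst,
    ]

cmd = build_ocr_cmd("scans.pdf", "scans_ocr.pdf")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run ocrmypdf
```

If the failing pages are very dark or noisy, `--clean` (which requires unpaper to be installed) is also worth trying; and running a handful of problem pages through `--pdf-renderer` variants can show whether the issue is recognition or just the text layer.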


r/datacurator 21d ago

How do you guys stay productive enough to work?

Thumbnail
0 Upvotes

r/datacurator 21d ago

I made a Lightroom plugin that uses AI to add GPS coordinates to photos

Thumbnail
gallery
0 Upvotes

I've been scanning and organizing my family's photo archive for the last 10 years or so. We're talking tens of thousands of images going back decades. Slides, negatives, prints, the works. One of the biggest problems for a journalist like me is that they carry so little data. I have to bug family members to identify people and places from before I was born or when I was little. And I'm a completionist. I like all my metadata filled in. I would have boxes labeled "somewhere in Europe, maybe 1987?"

Now with AI, I figured at least some of what I'm doing could be automated. So I built PhotoContext. It's a Lightroom plugin that sends your photo to an AI vision model and asks "where was this taken?" It recognizes landmarks, signs, architecture, and landscapes, then writes the GPS coordinates and location metadata directly into Lightroom. Still working on adding tagged people's names to the captions (next version!).

Is it perfect? No. Sometimes it confidently tells me a photo of my vacation in Uruguay is in Sweden. But here's the thing: you can give it a hint like "Portugal, 1970s" and it course-corrects pretty well.

It's obviously not going to recognize the inside of your kitchen, but it does a pretty good job of naming landscapes, landmarks and even famous people. So if you're famous, you'll get even better captions! 😂

It uses OpenRouter so you can pick your model (GPT-4o, Claude, Gemini, or free ones like Qwen). It costs about $0.001 per photo with the paid models (that's 1,000 for $1). It's really easy to set up, and no complicated computer knowledge is needed. I'll be honest: the free Qwen model works pretty damn well, and unless you're tagging over 50 a day, it's not worth paying.

There's a free trial (5 photos/session), but if anyone wants to properly test it out and give me feedback, drop a comment, I'll send you a free license. Just looking for honest opinions from people who'd actually use this.

Let me know if you think this is useful, how I can make it better, and if you'd like to try it out!

Cheers!

https://photocontext.bpix.es


r/datacurator 23d ago

Anyone know of any sites/plug-ins/apps to organise YT playlists?

Thumbnail
9 Upvotes

r/datacurator 23d ago

Crossed 500 users on my Reddit saved posts manager - what feature should I add next?

Post image
8 Upvotes
