StrikeWindow

Frequently Asked Questions

Everything you need to know about solunar theory, the forecast data, and how the app works.

What is solunar theory?

Solunar theory was developed by outdoor writer John Alden Knight in 1926. It proposes that fish and wildlife are most active during specific windows driven by the moon's position relative to the Earth and sun.

The theory identifies four daily periods. Two major periods occur when the moon is directly overhead (lunar transit) or directly underfoot (opposite transit) — these typically last 1–2 hours each. Two shorter minor periods coincide with moonrise and moonset. Activity peaks near the center of each window.

StrikeWindow calculates these periods for your exact location and date using precise astronomical positioning, then weights them by moon phase to produce an overall day score.
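As a rough illustration of how periods might be weighted by moon phase, here is a minimal sketch. This is not StrikeWindow's actual algorithm; the weights, names, and scoring formula are all illustrative assumptions.

```python
# Hypothetical sketch of a solunar day score. The weights and the
# phase curve are illustrative placeholders, not the app's real values.

MAJOR_WEIGHT = 1.0   # lunar transit / opposite transit periods
MINOR_WEIGHT = 0.5   # moonrise / moonset periods

def phase_multiplier(phase: float) -> float:
    """phase: 0.0 = new moon, 0.5 = full moon, 1.0 = next new moon.
    Assumes new and full moons score highest, quarter moons lowest."""
    # Distance from the nearest quarter moon (0.25 or 0.75), scaled to 0..1
    distance_from_quarter = abs(((phase * 2) % 1.0) - 0.5) * 2
    return 0.5 + 0.5 * distance_from_quarter

def day_score(n_major: int, n_minor: int, phase: float) -> float:
    """Combine the day's periods into a 0-100 overall score."""
    raw = n_major * MAJOR_WEIGHT + n_minor * MINOR_WEIGHT
    best = 2 * MAJOR_WEIGHT + 2 * MINOR_WEIGHT  # typical day: 2 major, 2 minor
    return round(100 * (raw / best) * phase_multiplier(phase), 1)
```

Under these made-up weights, a typical day (two major and two minor periods) at a full or new moon would score 100, while the same day at a quarter moon would score 50.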

Does solunar theory actually work?

The honest answer: it depends on who you ask, and what you mean by "work."

Anecdotally, generations of anglers and hunters have found solunar periods useful, and some people will plan their outings around them. There is also solid science behind lunar influence — moon phase drives tidal cycles, which directly affect baitfish movement and feeding patterns in coastal and tidal waters.

Controlled scientific studies are mixed. Some show statistically meaningful correlations between lunar position and fish activity; others find no significant effect once weather, season, and water temperature are accounted for.

Personally, I've been skunked during supposedly great times to fish, and I've also had great catches during them. I've known people who swear by solunar periods, and people who either don't know or don't care about them.

Probably the best view is that solunar periods are one useful signal among several. On a day with good weather and favorable conditions, timing your fishing or hunting around a major period is a reasonable edge — not a guarantee. No app can promise a strike.

Why isn't the forecast more accurate?

StrikeWindow uses Open-Meteo, a free and open-source weather API that pulls from global numerical weather models (GFS, ECMWF, and others). These are the same underlying models used by most consumer weather apps.

All forecasts degrade in accuracy the further out they project. The first 2–3 days are generally reliable; beyond 7 days, treat the forecast as a broad trend rather than a precise prediction. Localized effects — mountain valleys, coastal microclimates, lake-effect patterns — are difficult for global models to resolve at a hyperlocal level.

For dates beyond the 16-day forecast window, StrikeWindow automatically falls back to historical climate averages for your location, which give you a sense of typical conditions but are not a forecast.
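The fallback described above can be sketched in a few lines. The 16-day cutoff comes from the FAQ; the function names and data shapes are hypothetical.

```python
# Hypothetical sketch of the forecast-vs-climate-average fallback.
# Only the 16-day horizon mirrors the FAQ; everything else is illustrative.
from datetime import date

FORECAST_HORIZON_DAYS = 16

def conditions_for(target: date, today: date,
                   fetch_forecast, fetch_climate_normals):
    """Return model forecast data inside the 16-day window; otherwise
    fall back to historical climate averages for that calendar day."""
    if (target - today).days < FORECAST_HORIZON_DAYS:
        data = fetch_forecast(target)
        data["source"] = "forecast"
    else:
        data = fetch_climate_normals(target.month, target.day)
        data["source"] = "climate_average"
    return data
```

Tagging the result with its source lets the UI make clear whether you're looking at a forecast or merely typical conditions for the date.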

Why aren't radar images animated?

The current radar view shows the most recently available radar frame for your area. Animating radar requires fetching, storing, and looping through many sequential images — which adds meaningful data usage and battery drain, particularly in the background.

Animated radar is something we want to add, and the groundwork is already in place. It's a matter of getting the implementation right without impacting performance or data costs for users on limited plans.

Is the Intel Briefing AI-generated?

Yes. It uses an on-device AI model to distill the day's forecast and solunar data into a usable summary. The prompt is crafted to weed out silly results, like telling you to get out on the water when it's freezing, but the model can't look out the window and see that the lake is frozen, or know that the season is closed. So your mileage may vary there.

Why can't I see the Intel Briefing?

Only certain devices are supported. You must have AiCore installed on an Android device, or be using a more recent iOS device. It was important to use a local model, and that capability isn't available on every device. If your device isn't supported, or AiCore is missing or disabled, the feature is hidden.
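The gating logic amounts to a simple capability check. This sketch is purely illustrative; the real check queries the OS for AiCore availability on Android or on-device model support on iOS.

```python
# Hypothetical sketch of the Intel Briefing capability check.
# Parameter names are illustrative, not the app's real API.

def intel_briefing_available(platform: str,
                             aicore_installed: bool = False,
                             aicore_enabled: bool = False,
                             supports_on_device_model: bool = False) -> bool:
    """Show the Intel Briefing only when a local model can run."""
    if platform == "android":
        return aicore_installed and aicore_enabled
    if platform == "ios":
        return supports_on_device_model
    return False
```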

AI? Is this app AI slop?

I'm a solo developer with 30+ years of experience writing software. This app is something that has been rolling around in my head in some form since I first got my hands on a phone with data and a color screen. I wrote my first solunar library around 2001, using examples from the '60s and '70s made available by the US Naval Observatory.

I have built quite literally dozens of versions of this app over the years, and have never been happy enough with the end result to release it to anyone. It didn't have the polish, or it didn't have this or that piece of data that I thought was needed to make it really useful.

AI tools made it possible for me to get the application over the finish line. They're ridiculously useful for fixing small display issues or templating out a screen, and tedious things become a lot simpler. This is not an example of AI doing the job of a human; this is an example of AI helping a human do a job (one they've wanted to do for years, but could never find the time to do right).

Rest assured, the code in this app is reviewed by me and held to my standards of maintainability and quality, even if I haven't keyed in every line of it. Any errors or miscalculations lie on my shoulders, and are most likely a product of not accounting for the little things, like the Earth having a northern and southern hemisphere, and forgetting about that thing I'd fix later.

Every application of any complexity released in the past 20 years or so has some sort of generated code inside it. Whether it's a simple tool that builds generated classes, IntelliSense in a developer's IDE, or something completely 'vibe coded', it's a reality of how software is built, and has been for a long time.