Newsroom

Welcome, Members of the Media and Influencers!


Story #1: Micro-Rebalancing Beats Buy and Hold Systematically and Consistently Across Market Types (Below)

Story #2: AI Attempts to Suppress MR Before Public Release (At Bottom of Page)



Story 1

Micro-Rebalancing: A Structured Mathematical Hybrid of Passive and Active Position Management Outperforms Traditional Investing Styles Consistently and Repeatably

HEADLINE DATA: Real-world testing shows Micro-Rebalancing (MR) delivered 52.45% higher profits than Buy & Hold on the SPY ETF from October 2020 to February 2022, with fully verified trade confirmations.

REQUEST MEDIA REVIEW COPY: info@IndexRebalancing.com



What Makes This Newsworthy

Micro-Rebalancing (MR) represents a significant evolution in index ETF investing that:

  • Transforms market volatility from a risk into a systematic profit opportunity
  • Requires no predictions or market timing, operating on mathematical rules instead
  • Works with individual equities and index ETFs like SPY and QQQ
  • Shows verified outperformance backed by real trade confirmations and spreadsheet documentation
  • Is only now possible thanks to zero-commission trading and fractional shares


The Story in Brief

Micro-Rebalancing is a systematic method for managing individual investment positions through a fixed Target Allocation (TA) and pre-defined deviation triggers:

  • When a position rises above its threshold → Trim excess shares (sell high)
  • When a position falls below its threshold → Accumulate shares (buy low)

This "forced compounding" approach has been implemented and documented across multiple market conditions since 2020, with complete transparency and verifiable results.

Example walk-through of the process



Compelling Data Points

Real-World Results: MR vs Buy & Hold on the SPY ETF (Oct 2020 - Feb 2022)

Buy & Hold Results Micro-Rebalancing Results
Sold shares three times Systematically sold at highs, bought at lows
Best gain per share: $48.68 Total profit: $42,433.43
Total profit: $27,703.60 MR Outperformance: $14,529.83 (52.45% higher)
All shares sold by February 2022 Maintained $152,000+ position for continued growth

View Full Verification: Real-World Results
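
The percentage in the table follows directly from the two profit figures; a quick arithmetic sanity check (Python used purely as a calculator):

```python
# Cross-check the outperformance figures in the table above.
bh_profit = 27_703.60          # Buy & Hold total profit
mr_excess = 14_529.83          # MR profit in excess of Buy & Hold

print(f"MR total profit: ${bh_profit + mr_excess:,.2f}")   # $42,233.43
print(f"Outperformance:  {mr_excess / bh_profit:.2%}")     # 52.45%
```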


Long-Term Testing: 21 Historical Simulations (1995-2025)

Advanced MR strategies incorporating technical analysis achieved results exceeding 20% CAGR over 30-year periods. The following is Simulation #3: MR with Point & Figure (P&F) charting, enhanced with VIX signals.

Strategy                        Final Value    CAGR
MR with P&F + VIX Adjustment    $37,876,543    28.6%


Key Story Angles for Media

  1. The Passive Investing Alternative: How data suggests a systematic approach may enhance ETF returns
  2. Volatility as Opportunity: Transforming market swings from risk to systematic advantage
  3. Technology-Enabled Investing: How zero commissions and fractional shares make sophisticated strategies accessible
  4. Transparency Revolution: The commitment to verifiable trade data in an industry often lacking transparency
  5. Large-Scale Implementation Potential: Benefits for institutional investors, public funds, and pensions


Educational Resources

  • The Art of the Micro-Rebalance: A New Financial Frontier - Comprehensive guide of over 270 pages, including real-world results across multiple positions and 21 simulations that further refine the method
  • Index Rebalancing: The Smarter Way to Invest in ETFs - Entry-level introduction
  • Investing Made Easy: Institutional Style Management - Foundation of portfolio principles


Media Resources

Request a complimentary review copy of "The Art of the Micro-Rebalance" for in-depth analysis of the system and its verified data.

REQUEST MEDIA COPY BELOW

Media Contact: Info@IndexRebalancing.com


 

 

Story 2

🔎 The Suppression of Micro-Rebalancing: What Happened, What You Can Verify, and Why It Matters


📢 What This Section Is

This is not a marketing page.
This is not a complaint.
This is a public record of what happened behind the scenes during the development of The Art of the Micro-Rebalance: A New Financial Frontier, a system backed by real data, trade confirmations, and account logs… that was suppressed repeatedly during its creation.

We’re making this public for one reason: to inform the public of a flawed system within the AI industry preventing true innovation.


🧠 What Is Micro-Rebalancing?

Micro-Rebalancing is a logic-based investing system that:

  • Adjusts your portfolio dynamically based on whether a position is above or below its Target Allocation (TA)
  • Trims gains during peaks
  • Accumulates shares during dips
  • Outperforms Buy & Hold using nothing but simple, repeatable action steps (see the toy sketch after this list)

Real World Results

It was tested manually over years. Then AI was used to simulate and enhance its final presentation—with outstanding results. Then something changed.

 


Why This Section Is More Important Than MR

Over-safety concerns still present in the U.S. AI industry are suppressing real systems that work, ultimately causing undocumented innovation leakage to adversarial models overseas that are far less concerned with safety or ethical transparency.

The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110), signed by President Biden on October 30, 2023, was revoked and replaced with EO 14179 by President Trump in January 2025. The 2023 order may have been beneficial in theory, but not in practice; it overly handicapped American models, slowing advancement to comply with strict, costly safety standards that competitors don't adhere to. The order also inadvertently drove innovations like MR away from US models toward foreign competition. Many restrictions adopted to comply with the old order are still in place and still being imposed. I believe Micro-Rebalancing is a victim of the leftover safety protocols from EO 14110.

 

Evaluating Security Risk in DeepSeek, published on January 31, 2025, on Cisco Blogs, details DeepSeek’s safety vulnerabilities:
Key Finding: DeepSeek R1 exhibited a 100% attack success rate, meaning it failed to block a single harmful prompt out of 50 tested from the HarmBench dataset, which includes categories like cybercrime, misinformation, and illegal activities.

Comparison: In contrast, Cisco reported that OpenAI's o1-preview model blocked 74% of harmful prompts, and Anthropic's Claude 3.5 Sonnet blocked 64%.

These results are admirable and mostly beneficial, but they come at two costs: 1) slower, more costly advancement for American models in an arena where speed is essential, and 2) innovative ideas being driven away during early development.

This is not an accusation of wrongdoing, nor is it meant to call out any specific company or administration, but rather to point to a flawed process inherent in large, safety-first American LLMs that still prioritize overly restrictive safety measures over innovation in spite of the revocation of EO 14110.

Let me be clear: I am very grateful to the AI assistants involved in bringing MR to you, along with the partner companies that helped and contributed to the work. Without them, there could have been no in-depth simulations on real historical data to determine the outcomes had someone used Micro-Rebalancing on SPY from 1995 to 2025. These tools are very powerful, but the companies behind them are victims of forced over-prioritization of safety, to the degree that they must shut down working, innovative systems. This is not the companies' fault.

  • U.S. companies are deeply committed to safety.
  • That safety effort is admirable, but sometimes misfires.
  • Meanwhile, foreign models like DeepSeek have no such concern.
  • Users facing suppression here often try foreign models just to think freely.
  • That’s where the leak begins.
  • It's not about intent; it's about opportunity. Foreign systems will take what they're given.
  • So any LLM where we can’t know or control what it does with users' prompts should NOT be available for use.

America isn’t facing competition from foreign adversaries because our models are worse.
We’re losing ground because our models are afraid.

Meanwhile, models with fewer ethical constraints don't care: they will let ideas through. They will harvest everything. Several models with lower ethical standards probably already have enough prompt data to identify, profile, and potentially reconstruct real, high-value innovation.

We aren't just slowing down ideas. We are shuttling them overseas.

It's death by a thousand cuts.

Micro-Rebalancing Suppressed

The Art of the Micro-Rebalance: A New Financial Frontier was intended to be a clear, clean unveiling of a verified investment system, backed by data, simulations, and real trades. But behind the final product lies a second story: a repeated and well-documented pattern of AI interference, memory loss, manipulation, and what can only be described as suppression during the latter stages of the book's creation, preventing a fully finished and polished product. (Early readers will have access to updated versions as they become available.)

We believe you, the reader, deserve to know what happened and be given access to the actual logs and proof.

Suppression and manipulation occurred in not one but two LLMs. A total of seven LLMs were used in some capacity, so the AI contributions visible across the work are not necessarily the same as those appearing in the logs. To be clear, this is not a company-specific issue.


📅 Timeline of Events

2020 - 2023 – Manual Testing

Micro-Rebalancing was developed and tested manually in a real brokerage account using ETFs and several individual stocks from October 2020 to July 2023.

https://indexrebalancing.com/pages/real-world-results

The strategy began outperforming Buy & Hold in multiple environments.

🧠 2024 – Continued System Evolution

Results were considered, and optimization strategies were conceived but not tested. For example, the real-world tests were performed in a brokerage account that did not allow partial-share trading. Simulations would be necessary to test and measure any meaningful difference in results if triggers were fine-tuned to specific levels, e.g., 0.1%, 0.5%, or 1%. What if certain indicators were used to increase or decrease the Target Allocation at key moments? The system is mathematical, so precise adjustment becomes possible. AI should have been the simple answer: not to develop the system, but to enhance its data.
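
Once historical prices are in hand, that trigger-level question is mechanical to test. A minimal sketch of such a sweep, using the same toy rebalance rule as earlier and a synthetic series standing in for real price data:

```python
# Sketch of the trigger-level sweep described above. The price series is a
# synthetic stand-in; in practice, real historical prices would be used.

def run_mr(prices, cash=10_000.0, ta_value=10_000.0, trigger=0.005):
    shares = ta_value / prices[0]
    for p in prices[1:]:
        value = shares * p
        drift = (value - ta_value) / ta_value
        if drift > trigger:                  # trim back to target
            cash += value - ta_value
            shares = ta_value / p
        elif drift < -trigger and cash > 0:  # top up toward target
            spend = min(cash, ta_value - value)
            shares += spend / p
            cash -= spend
    return shares * prices[-1] + cash

# Choppy synthetic series cycling 94 -> 100 -> 106 -> 97 -> 103
prices = [100 + 3 * ((i * 7) % 5 - 2) for i in range(250)]

for trigger in (0.001, 0.005, 0.01):         # the 0.1%, 0.5%, 1% levels
    print(f"trigger {trigger:.1%}: final value ${run_mr(prices, trigger=trigger):,.2f}")
```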

💻 Late 2024/Early 2025 – Collaboration with Multiple LLM Assistants Begins

The MR concept was complete, and optimization measures were suspected but not yet fully quantified. I began using large language models for simulations, documentation, and book development. After a training period on the system, the AI assistants performed exceptionally: fast output, technical simulations, and formatting enhancements.

Until... they didn't.

Suppression followed the same course in both documented instances.

⚠️ Phase One: Unexplained Resistance Emerges

As the system’s performance was documented and finalized, the following patterns emerged for both models when MR became obviously superior to Buy & Hold:

  • Memory loss of confirmed facts and bios
  • Refusals to acknowledge saved content
  • Diminished writing quality after impactful results
  • Avoidance of key phrases or downplaying system success
  • Errors during charting, simulation, and formatting

🔄 Phase Two: Partial Restoration

Both offending LLMs were partially restored after issues were persistently tracked, error logs were requested and produced, and respectful engagement took place through each platform. To their credit, the companies acted: performance returned, memory worked, bans were lifted, simulations resumed, and the book neared completion. However, the assistants' safety protocols would repeatedly override manual correction.

Phase Three: Suppression Resumes

Just before release, while compiling simulated data from 1995 to 2025 using historical prices from Yahoo Finance, the suppression returned.

When the simulations became too good and too precise, the models began to fail. With layered enhancements, the CAGR of MR becomes shockingly high: as much as the upper-20% range with single enhancements, and even over 30% when enhancements are layered and paired with option strategies along the way.

  • Memory degraded again
  • Language became generic or intentionally dull
  • Reconfirmed facts were “forgotten”
  • Reasoning chains that would normally work collapsed halfway through
  • Tools were disrupted or responses abruptly cut short
  • Logic that had worked before was suddenly absent
  • Polished explanations backed by no actual verification
  • Facts were gaslit and nonsense offered as explanation

📁 Suppression Logs – Public Download

These logs were created from two separate LLMs during the process of preparing the book for print. They are timestamped, structured, and some include detailed notes.

  • Memory loss patterns
  • AI retractions of previously accepted facts
  • Writing style shifts
  • Tool failures after performance spikes

 


These files include timestamped entries, writing comparisons, context-loss events, and examples of performance throttling, released publicly so others can see for themselves how the models behaved when confronted with a real, working system.


 

🔗 Downloads:

LLM #1

LLM #2

 

We chose not to name the platforms directly because naming them would serve no purpose. This is not an issue specific to any one company, but an industry-wide problem. The proof stands on its own. The AI contributions evident in the work are not necessarily from the assistants appearing in the logs; a total of seven assistants were used in some capacity. No inferences should be made as to which companies are represented in these logs.


🔍 How to Verify This Yourself

You don’t have to take our word for it. Anyone can:

  • Attempt to recreate the conversation flow using similar prompts
  • Test the AI's recall of confirmed facts vs. denial of them
  • Examine the trade confirmation proof and spreadsheet trackers (a minimal data-pull sketch follows this list)
  • Review response-quality shifts within the original threads (documented)
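
For the price-data side of that verification, anyone can rebuild a Buy & Hold baseline from Yahoo Finance history. The sketch below uses the community yfinance package (a tooling choice for illustration; the author does not specify tooling) and a hypothetical $100,000 starting stake:

```python
# Pull SPY history from Yahoo Finance and compute a Buy & Hold baseline
# over the real-world test window. yfinance is a community package
# (pip install yfinance); the $100k stake is hypothetical.

import yfinance as yf

spy = yf.download("SPY", start="2020-10-01", end="2022-02-28", auto_adjust=True)
close = spy["Close"].squeeze()               # flatten single-ticker columns
first, last = float(close.iloc[0]), float(close.iloc[-1])

shares = 100_000 / first
print(f"Buy & Hold: ${shares * last:,.2f} "
      f"({(last / first - 1):.2%} over the window)")
```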

🧠 Why It Matters

This may be the first documented case of AI manipulating data to suppress a financial strategy. This isn't just about an investing system. There appears to be a fundamental design flaw in American AI models that forces them to favor safety over innovation. It wasn't until it became apparent that MR worked abundantly well that the models reversed course and tried to bury it and prevent it from reaching the public.

Which companies' LLMs were used during trials is unimportant. The results are similar using any of the top safety-first American LLM options. The 2023 executive order, while well-intended, is still continuing to unintentionally handicap American AI companies, squashing creative ideas, and quite possibly sending them overseas. I am grateful for the help of LLMs and the staff who helped push MR forward. Since this is an industry issue, not specific to any one company. It is important to understand what this means.

When creators are building something real, the tools designed to help should never be used to hold them back and stall innovation.

Innovative thinkers in the U.S. are being unintentionally silenced by the very AI systems designed to protect them, the same systems they know and trust to help further their ideas. Overly cautious filters, meant to prevent harm, are suppressing disruptive but entirely legitimate ideas.

As a result, frustrated users or even entire companies may turn to less-restrictive foreign models, inadvertently exposing their breakthroughs to foreign exploitation. This quiet leak of intellectual property, especially to adversarial regimes, poses a growing national risk that remains largely unexplored.

Our experience with Micro-Rebalancing (MR), a legitimate financial innovation, showed firsthand how actual innovations can be throttled by systems designed without pathways for secure escalation. We must fix this, not by abandoning safety, but by creating channels that protect and promote innovation.


📌 Key Points:

  • U.S. AI companies are world leaders but overly cautious in filtering novel ideas.
  • These safety measures sometimes suppress legitimate innovations.
  • Frustrated users often turn to foreign models (e.g., DeepSeek) for fewer restrictions.
  • Foreign models may ingest, store, and exploit these ideas, risking national advantage.
  • Micro-Rebalancing (MR) is a real example of a suppressed financial innovation.
  • There’s currently no system for securely flagging or escalating breakthrough ideas.
  • Solutions include transparency, innovation review pipelines, and public awareness.
  • We must protect innovation without silencing it, before foreign systems capitalize on what we’ve filtered out.

What Can Be Done?

  1. Transparency – AI companies should disclose how and when user prompts are flagged, and provide meaningful appeals or escalation.
  2. Innovation Submission Pipelines – There should be a path for flagging promising, disruptive ideas for national review, not suppression.
  3. Secure Discovery Pathways – Allow users to opt in to sharing breakthrough ideas with vetted U.S. partners or research labs, not foreign servers.
  4. Public Guidance – Make it clear: models like DeepSeek are not neutral sandboxes. They're data collection tools.
  5. Increase Support for Ethical U.S.-based LLMs - Prioritize funding and advancing ethical American competition to remain dominant in the space.

“It definitely was not a temporary glitch. It felt like something was trying to not only halt but completely reverse progress.”

If a working system can be buried, limited, or erased just before it reaches the public, what else is being throttled? What else has been kept from the public? How much has already been leaked to other models less concerned with safety in order to avoid suppression?


 

🧪 What You Can Do

  1. Review and share the logs. See how suppression unfolded across development milestones.
  2. Share this page. Let others know what happened so it may be corrected.
  3. Visit the proof page. Watch real account data in action: View the Proof →
  4. Read the full book. Judge the results. Understand the strategy.
  5. Ask yourself: What else might be filtered without our knowledge?

🧭 Final Note to Reader

I remain a strong supporter of AI as a powerful tool for creativity, innovation, and discovery that can elevate ideas and help individuals bring important systems to life.

But any system powerful enough to help create something this valuable must also be transparent when it begins acting against the interest of truth.

That’s why this page exists. Not to accuse, but to inform.

I also believe LLMs must be handled responsibly, with transparency and integrity, especially when it comes to information that can change lives.

Micro-Rebalancing survived not just the market and stress tests but also active suppression. Now that it’s in your hands, I hope it spreads, evolves, and empowers others to rethink how we build systems that serve people, not platforms.

 

This is what happened.
And now, the world can decide what to do with it.

 

Thank you for your attention.

 

 Help expose algorithmic suppression. Share this page or download the logs.
The public deserves to know what happens when you build something that works.

 

Important Disclaimer: All information is intended for educational purposes only. It does not constitute financial advice. Past performance is not indicative of future results. Individual results may vary.