AI Coding Tools Are Still in the R&D Stage

by Ivan, February 11th, 2025
Too Long; Didn't Read

According to Stack Overflow's 2024 survey, 76% of developers are using or planning to use AI tools.

According to Stack Overflow's 2024 survey, 76% of developers are using or planning to use AI tools—they're now just part of the job. They help with mundane tasks but can be annoying when they confidently generate nonsense. While YouTubers build "billion-dollar startups" with ChatGPT prompts and AI agents are taking over the world every week, real teams are still figuring out how to use these tools effectively. Today, mastering AI assistance is as fundamental as coding or system design—we need to adapt, and fast.


The problem is that these tools are essentially still in an R&D stage: they change constantly, they copy each other, and (to their credit) they solve previously unsolved problems. What they all lack is clear usage guidance. Even Copilot, the most established of them, offers few tutorials or documented best practices. The solution? We'll do what developers do best: organize and create a framework ourselves.

Game changer

...and also "Quantum leap", "Paradigm shift", "Agents are coming", you name it. While these tools are indeed transforming our workflow, the change is more practical: developers now operate like team leads, managing AI assistants instead of writing code directly. The core skills have shifted to designing, planning, describing, and reviewing.


The main UX concepts these tools introduce are:


  1. Inline suggestions - the most natural integration pattern. When they work, they're seamless; when they disrupt your code, productivity breaks immediately.
  2. Chat - the classic approach, where LLMs interact with your codebase through guided prompts.
  3. Composer ("Copilot Edits", "Cline Act") - the new concept many find confusing. It's a shift toward autonomous agents: a composer works across several files and can apply changes and iterate automatically based on errors, linting problems, and so on.
  4. Agents - the anticipated next evolution: fully autonomous, personalized AI assistants integrated directly into your IDE.


Suggestions just work


Getting familiar with suggestions isn't complicated: they're where AI assistants started and what first caught our attention. Chats are equally straightforward: add files to the context, iterate, apply, and validate the results.


Composer-type tools are harder to master, with a real learning curve and some non-obvious techniques. Currently, the Cursor editor offers the most approachable composer, while Copilot follows closely with "Copilot Edits," having recently introduced agent-based workflows.


To become proficient with composers, you need to understand three key concepts:


  1. Instructions
  2. Rules
  3. Context


Let's examine each of these.

Instructions

As team leads rather than just developers, we should begin any new project or major feature by creating a Design Document or clear Product Requirements Document. This practice develops strong engineering and product thinking while saving substantial implementation time. The best part is that these documents can be:


  1. Generated with AI
  2. Then used as instructions for composers


To create these documents, first gather requirements from humans, then consult a reasoning model in Chat. Both Copilot and Cursor have built-in reasoning models suited to this task: OpenAI's o1 and o3-mini are available by default, and Cursor's Chat supports DeepSeek-R1 (though not yet in its Composer)—all excellent tools for this purpose.


Reasoning models in Cursor's Chat


A good practice is storing design documents at your repo's top level (we'll use a requirements folder) organized by features, with a ProjectOverview.md in the root. Here's an example structure for a Twitter web app's requirements:


requirements/
├── ProjectOverview.md            # Core product description
└── Features/
   ├── Authentication.md          # User registration
   ├── Tweet.md                   # Tweet CRUD
   ├── UserProfile.md             # Profile management 
   ├── Engagement.md              # Likes, retweets
   ├── Infrastructure.md          # Storage, caching, etc
   └── ...
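To illustrate, ProjectOverview.md might open like this. The content below is hypothetical—just a sketch of the level of detail that gives a composer enough grounding to work from:

```markdown
# Project Overview: Twitter Web App

## Goal
A minimal Twitter-like web application: users register, post short
tweets, follow each other, and engage via likes and retweets.

## Tech Stack
- Frontend: React + TypeScript
- Backend: Node.js REST API
- Storage: PostgreSQL, with Redis for caching

## Feature Documents
Each feature is specified in requirements/Features/ and should be
implemented against this overview.
```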


If everything is properly set up, adding a design document for a new feature is as simple as writing this prompt:


Creating instructions for the new feature
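To make this concrete, a prompt along the following lines works well. The wording and the feature name below are illustrative, not the exact text from the screenshot:

```text
Read requirements/ProjectOverview.md and the existing files in
requirements/Features/. Following the same structure and level of
detail, create requirements/Features/DirectMessages.md describing a
direct messaging feature: data model, API endpoints, edge cases, and
acceptance criteria.
```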


Storing instructions in your codebase offers clear advantages: version control, easy maintenance, and standard PR workflow. However, non-technical team members like Product Owners, managers, and UX designers may need access without using git. Here are three solutions:


  1. Store everything in Notion, publish the instruction pages, and inject them as documentation using the @Docs shortcut
  2. Create a pipeline that converts Notion pages to .md files and stores them in the repository
  3. Teach your team to use git - the most beneficial option for the whole team
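The pipeline in option 2 boils down to fetching a page's blocks from the Notion API and rendering them as Markdown. Here is a minimal sketch of the rendering half, assuming the block JSON shape returned by Notion's `GET /v1/blocks/{id}/children` endpoint; fetching, pagination, and the many other block types are left out:

```python
# Sketch: convert a list of Notion block objects into Markdown.
# Assumes Notion's public API block shape: each block has a "type" key
# and a payload under that type name containing a "rich_text" array.

def rich_text_to_md(rich_text):
    """Join a Notion rich_text array into a Markdown string."""
    parts = []
    for span in rich_text:
        text = span.get("plain_text", "")
        ann = span.get("annotations", {})
        if ann.get("bold"):
            text = f"**{text}**"
        if ann.get("code"):
            text = f"`{text}`"
        parts.append(text)
    return "".join(parts)

def blocks_to_md(blocks):
    """Render Notion blocks as Markdown, one block per paragraph."""
    lines = []
    for block in blocks:
        kind = block.get("type")
        payload = block.get(kind, {})
        text = rich_text_to_md(payload.get("rich_text", []))
        if kind == "heading_1":
            lines.append(f"# {text}")
        elif kind == "heading_2":
            lines.append(f"## {text}")
        elif kind == "bulleted_list_item":
            lines.append(f"- {text}")
        elif kind == "paragraph":
            lines.append(text)
        # tables, toggles, images, etc. omitted in this sketch
    return "\n\n".join(lines)
```

Run it on a schedule (or a Notion webhook) and commit the resulting `.md` files into the requirements folder, and non-technical teammates never have to touch git.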


Once your instructions are accessible in your editor, switch to the composer and start implementing. This leads us to organizing the Rules.

Rules

Currently, only Cursor supports "rules" - direct implementation instructions for specific files/folders. This feature will likely spread to other editors, including VSCode Copilot, which currently only offers "prompt files" that can't be directly attached to the codebase.


Cursor's rules are more comprehensive - imagine CONTRIBUTING.md combined with linter rules and enhanced by LLM capabilities. These rules are product-agnostic, shareable, and effectively transfer knowledge, best practices, and implementation details across teams and library users.


Creating Cursor Rule


Rules can be created via the command palette and are stored in your project's .cursor/rules folder with a .mdc extension. This format enables advanced features like @mentioning specific files in your codebase. It's highly recommended to commit these rules to your repository and collaborate on improving them. Here's the workflow for using rules:


  1. Research cursor rules specific to your technology stack, starting with curated lists as references. For example, you can find well-written cursor rules for Next.js and React that serve as good templates.
  2. Update rules proactively during development. When you notice a pattern that could be formalized into a rule while writing code, document it immediately in your rules file.
  3. Learn from the best in the field. A new approach for library creators to share knowledge and increase adoption is creating specialized rules for AI assistants. I know of a few companies doing this - Convex stands out by creating rules for both OpenAI and Anthropic models and sharing them in their documentation. While I haven't used their product, their focus on improving the developer experience through AI integration is compelling. Supabase is another great example.
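A rule file is just YAML frontmatter plus Markdown. The sketch below shows the general shape for a hypothetical data-fetching convention; treat the frontmatter keys and the rule text as illustrative, since the format is still evolving:

```markdown
---
description: Data-fetching conventions for React components
globs: src/**/*.tsx
---

- Never call fetch directly inside components; wrap API calls in
  helpers under src/api/ and expose them through hooks.
- Co-locate loading and error states with the component that
  triggers the request.
- After any mutation, refresh the affected cached queries rather
  than mutating local state by hand.
```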


Make sure that the rules are included. Look for the "ruler" icon in the files list


Many libraries urgently need AI rules. From a frontend developer's perspective, I would benefit from having them for TanStack Query, React Spring, Firebase, and many, many more. These rules would save significant time and help prevent common mistakes that developers make when learning new technologies.

Context

Remember to include all relevant context - the more quality data you provide, the better results you'll get. Cursor editor has an advantage over Copilot here by allowing several types of context:


  1. Documentation - works really well; give it an entry point to any documentation and it will download it, parse it, and save it for future needs
  2. Web search - not strictly context, but it provides quick access to online resources
  3. Various development tools - specific git commits, lint errors, Notepads, and other artifacts
  4. MCP servers - provide real-time context. Though setup can be a bit tricky, they're valuable when you need live data access


Different types of context available in the Cursor editor


After mastering these tools, your next step is optimizing both your individual and team performance. But what's the path forward from here?

Cline and Roo-Code: The Control

You'll always face a tradeoff between simplicity and control, between automated solutions and manual decision-making. If you're willing to dive deep and aren't afraid of dealing with bugs, performance challenges, and rough edges, consider trying Cline (or its fork Roo-Code, which has a slightly different philosophy).


Both tools are designed to give you as much clarity as possible about what's really going on under the hood:


  1. They are open-source and subscription-free. Instead, you use your own LLM API keys or services like OpenRouter, paying only for what you use.
  2. Cline clearly shows all its operations, including which files it reads and modifies.
  3. Cline provides detailed insights into LLM communications, context window status, and the cost of each chat session.
  4. It features intuitive Plan/Act modes - a logical approach that other tools should consider adopting.


Cline lets you control the cost of every task


The killer feature is that Cline can actually run and debug your application - it's real and functional, as you'll see when you try it.


If all this interests you, check out Addy Osmani's recent article, which provides an excellent introduction to these editors.

Conclusion

Adopting these tools isn't a simple journey, and don't expect to write "the whole project from scratch in under 5 minutes." Yet there is a clear path forward.


The technology is already there, but we're missing a robust AI-integrated workflow that organizes the entire team - not just developers, but crucially managers and designers - around these new tools. AI can feel intimidating, and sharing its impact may seem uncomfortable at first (like telling your team lead that AI wrote 80% of a feature through careful configuration). However, software development will only evolve when these tools become integral to team workflows. The most successful transitions happen in teams that foster open discussion about AI experiences, collaborative tool exploration, and actively contribute their learned best practices to the broader development community.