Article by Pushkar Deshmukh, Senior iOS Engineer

Level Up Prompt Engineering with Cursor Rules & Skills (Practical Guide)

Most developers hit a wall with prompt engineering: repeating the same instructions over and over to get consistent AI results. In this guide, we explore how Cursor Rules and Skills eliminate that friction. You’ll learn how implicit prompt injection works, how results differ with and without structured rules, and how to apply these techniques in a real demo project.

If you're serious about improving how you work with AI, read this carefully.
After this blog, you will:
- Stop repeating the same instructions in every prompt
- Get better results with fewer words
- Understand how persistent context changes output quality
- Be able to try this immediately in your own project
I've been experimenting with Cursor Rules and Cursor Skills recently, and the shift was bigger than I expected.
If you're building software with AI assistance, chances are you're over-prompting — and you don't even realise it.
The Hidden Cost of Prompt Engineering
What is Prompt Engineering?
At its core, prompt engineering means telling AI what you want and letting it figure out how to do it.
In software development:
- You define the goal
- AI handles the execution
- You focus on outcomes
That's powerful. But here's the problem.
If you're repeating the same architectural instructions in every prompt, you're not engineering prompts. You're patching context manually. And that doesn't scale.
The Breaking Point
If you've used AI for development long enough, this will feel painfully familiar:
> "Use MVVM."
> "Use Combine, not Rx."
> "Follow TDD."
> "Avoid unused variables."
> "Keep business logic out of Views."
Every. Single. Time.
That repetition creates friction. It increases mental overhead. It defeats the whole point of automation.
This is where Cursor Rules and Cursor Skills change the game.
Why Cursor?
This blog is specifically about Cursor IDE.
Yes, the built-in AI is useful. But the real advantage isn't chat inside the editor — it's persistent intelligence.
You're not just prompting. You're configuring behaviour.
Cursor Rules — Persistent Context
Cursor Rules let you define project-level standards once.
Instead of saying this every time:
> "Use SwiftUI with Combine, follow MVVM, write tests first, avoid force unwraps..."
You define it in a .cursor/rules/ file, and it becomes implicit context for every interaction.
Example: .cursor/rules/ios-standards.md
```markdown
# iOS Project Standards

## Architecture
- Follow MVVM pattern for all features
- ViewModels must be @Observable classes (or use ObservableObject with Combine)
- Views must NOT contain business logic — only UI binding
- All state mutations happen in the ViewModel

## Reactive Framework
- Use Combine for async operations
- Do NOT use RxSwift or third-party reactive libraries
- Prefer async/await where Combine is unnecessary

## Testing
- Follow strict TDD: write failing tests BEFORE implementation
- Every ViewModel must have a corresponding test file
- Use XCTest — do not use Quick/Nimble for new tests
- Test naming: test_whenCondition_shouldExpectedBehavior

## Code Quality
- No force unwraps (`!`) unless explicitly justified with a comment
- No unused variables or imports
- Keep files under 200 lines — split if larger
```

After this, the AI behaves as if it knows your standards — because you taught it permanently.
That's the shift:
> From repeating instructions → To defining constraints
Cursor Skills — Reusable Workflows
If Rules define what the AI should follow, Skills define how it should execute.
Think of Skills as structured playbooks — step-by-step workflows the AI follows when performing a task.
Example: .cursor/skills/ios-tdd-feature/SKILL.md
```markdown
# Skill: iOS Feature Creation (TDD + MVVM)

## When to Use
Use this skill when creating any new feature in the iOS app.

## Workflow

### Step 1 — Define the Protocol
Create a protocol that defines the ViewModel's public interface.

### Step 2 — Write Failing Tests
Create a test file with test cases that cover:
- Initial state
- Happy path
- Error/edge cases

All tests must fail at this point (Red phase).

### Step 3 — Implement the ViewModel
Implement the ViewModel conforming to the protocol.
Use Combine for async operations.
Run tests — they should now pass (Green phase).

### Step 4 — Create the View
Build the SwiftUI View that binds to the ViewModel.
The View must contain ZERO business logic.

### Step 5 — Refactor
Clean up naming, extract reusable components, remove duplication.
All tests must still pass.
```

Now instead of re-explaining your TDD approach every time, you just say "Create a calculator feature" and the AI follows the playbook.
Less noise. More consistency.
Same Prompt. Different Results.
Let's make this concrete.
Prompt:
> "Create a SwiftUI calculator component that supports addition only."
Without Rules and Skills
The AI might produce something like this:
```swift
struct CalculatorView: View {
    @State private var num1: String = ""
    @State private var num2: String = ""
    @State private var result: Double = 0

    var body: some View {
        VStack {
            TextField("First number", text: $num1)
            TextField("Second number", text: $num2)
            Button("Add") {
                // Business logic directly in the View
                let a = Double(num1) ?? 0
                let b = Double(num2) ?? 0
                result = a + b
            }
            Text("Result: \(result)")
        }
    }
}
// No tests. No ViewModel. Logic in the View.
```

What went wrong:
- Business logic lives inside the View
- No ViewModel, no separation of concerns
- No tests at all
- No Combine usage
- You'll need 2-3 follow-up prompts to fix this
With Rules and Skills
Same prompt. But now your Rules enforce MVVM + Combine, and your Skill defines the TDD workflow.
Step 1 — The AI writes tests first:
```swift
import XCTest
@testable import Calculator

final class CalculatorViewModelTests: XCTestCase {
    private var sut: CalculatorViewModel!

    override func setUp() {
        super.setUp()
        sut = CalculatorViewModel()
    }

    func test_whenInitialized_shouldHaveZeroResult() {
        XCTAssertEqual(sut.result, 0)
    }

    func test_whenAddingTwoNumbers_shouldReturnCorrectSum() {
        sut.firstNumber = "5"
        sut.secondNumber = "3"
        sut.add()
        XCTAssertEqual(sut.result, 8)
    }

    func test_whenInputIsInvalid_shouldReturnZero() {
        sut.firstNumber = "abc"
        sut.secondNumber = "3"
        sut.add()
        XCTAssertEqual(sut.result, 0)
    }

    func test_whenInputIsEmpty_shouldReturnZero() {
        sut.firstNumber = ""
        sut.secondNumber = ""
        sut.add()
        XCTAssertEqual(sut.result, 0)
    }
}
```

Step 2 — Then the ViewModel:
```swift
import Foundation
import Combine

final class CalculatorViewModel: ObservableObject {
    @Published var firstNumber: String = ""
    @Published var secondNumber: String = ""
    @Published private(set) var result: Double = 0

    func add() {
        // Guard both inputs so invalid or empty text yields 0,
        // exactly as the failing tests specify
        guard let a = Double(firstNumber), let b = Double(secondNumber) else {
            result = 0
            return
        }
        result = a + b
    }
}
```

Step 3 — Then the View (no logic):
```swift
import SwiftUI

struct CalculatorView: View {
    @StateObject private var viewModel = CalculatorViewModel()

    var body: some View {
        VStack(spacing: 16) {
            TextField("First number", text: $viewModel.firstNumber)
                .keyboardType(.decimalPad)
            TextField("Second number", text: $viewModel.secondNumber)
                .keyboardType(.decimalPad)
            Button("Add") {
                viewModel.add()
            }
            Text("Result: \(viewModel.result, specifier: "%.2f")")
                .font(.title)
        }
        .padding()
    }
}
```

The difference isn't intelligence. It's configuration.
| Aspect | Without Rules/Skills | With Rules/Skills |
|---|---|---|
| Architecture | No pattern | Clean MVVM |
| Tests | None | Written first (TDD) |
| Business logic | Inside View | Isolated in ViewModel |
| Combine usage | Missing | Used correctly |
| Follow-up prompts needed | 2-3 | 0 |
Pro Tip: Let Cursor Create the Rules
Most developers miss this. You don't need to manually write everything from scratch.
You can simply tell Cursor:
> "I want my iOS project to follow MVVM with Combine. Every new feature should use TDD. Generate Cursor rules and a skill for this."
Cursor will scaffold the structure for you — the .cursor/rules/ and .cursor/skills/ directories, with starter content you can refine.
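The exact output depends on what you ask for, but the scaffolded structure will look roughly like this (file names here are illustrative, taken from the examples earlier in this post):

```
.cursor/
├── rules/
│   └── ios-standards.md
└── skills/
    └── ios-tdd-feature/
        └── SKILL.md
```

Treat the generated content as a starting point: read it, delete anything that doesn't match how your team actually works, and tighten the wording before you rely on it.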
After that:
- New features follow MVVM automatically
- ViewModels are structured properly
- TDD becomes default behavior
AI stops feeling like autocomplete. It starts feeling like part of your engineering system.
How to Try This Right Now
Here's a 15-minute experiment.
Step 1: Create a Small SwiftUI Project
Keep it simple — a calculator, counter, or login form.
Step 2: Run a Feature Prompt WITHOUT Rules
"Create a SwiftUI addition feature with ViewModel and tests."
Save the output.
Step 3: Define Rules
Create .cursor/rules/ios-standards.md with your architecture, testing, and code quality standards.
Step 4: Create a Skill
Create .cursor/skills/ios-tdd-feature/SKILL.md with your step-by-step feature creation workflow.
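If you prefer to create these by hand rather than letting Cursor scaffold them, Steps 3 and 4 boil down to two files in two directories. A minimal sketch (paste in the rule and skill content from earlier in this post):

```shell
# Create the directories Cursor reads rules and skills from
mkdir -p .cursor/rules .cursor/skills/ios-tdd-feature

# Stub the two files; fill them with your standards and workflow
touch .cursor/rules/ios-standards.md
touch .cursor/skills/ios-tdd-feature/SKILL.md
```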
Step 5: Run the EXACT Same Prompt Again
But this time, don't mention TDD or MVVM in the prompt. Just say:
"Create a SwiftUI addition feature."
Step 6: Compare
| Metric | Before | After |
|---|---|---|
| Lines of code | More (mixed concerns) | Fewer (clean separation) |
| Follow-up prompts | 2-3 | 0 |
| Tests | Missing or incomplete | Comprehensive, written first |
| Architecture | Inconsistent | Matches your standards |
You'll see the difference immediately.
The Real Shift
Most developers ask:
> "How do I write better prompts?"
The better question is:
> "How do I design AI behaviour so I don't have to repeat myself?"
Prompt engineering is phase one. You learn what to say.
AI workflow design is phase two. You build systems that make saying less deliver more.
Rules and Skills move you into phase two.
Why This Matters
If you're:
- An engineer using AI daily — this saves you hours of repetition per week
- A tech lead enforcing standards — this makes AI follow your standards by default
- A founder building fast — this is leverage, not convenience
Final Thought
AI doesn't fail you. Unstructured usage does.
If you're still repeating architectural constraints in every prompt, you're not scaling your output.
Stop prompting harder. Start designing smarter.
If this changed how you think about working with AI, share it with your team. The shift from prompting to designing is the difference between using AI and leveraging it.



