Code Navigation Across Languages

SMART TS XL delivers unified symbol resolution and code navigation across every language in your environment.

Code navigation works well when a developer stays inside a single language in a single codebase. Hit F12, jump to the definition. Right-click a method, find all references. These interactions feel instant because the IDE has a complete, coherent model of the code: it knows every symbol, every type, every import chain. The moment that boundary expands to include a second language, however, that model fractures. The IDE understands one of the languages but not the other. The developer sees a call, follows it to the edge of their current file, and then hits a wall: the function being called lives in a different language, possibly in a different repository, governed by different conventions the tool does not understand. From that point on, navigation becomes manual. The developer switches tools, searches by text, and hopes the result is what they were looking for.
For teams working in genuinely polyglot environments, this is not an occasional inconvenience. It is the default condition of every significant task. Enterprise systems routinely span COBOL and Java, JCL and SQL, Python and C++, or any number of combinations that reflect decades of technology decisions layered on top of each other. Each language boundary in that stack is a place where automated navigation stops and manual reconstruction begins. The friction compounds across every developer, every task, and every team until the cost becomes structural: slower onboarding, riskier changes, longer incident investigations, and a growing reliance on the few people who hold cross-language knowledge in their heads. As examined in the context of COBOL static analysis solutions, the challenge of reasoning across language boundaries is not simply a tooling problem. It is a foundational obstacle to operating large, heterogeneous systems safely.
Understanding why this breakdown happens, and what it costs in practice, is the first step toward addressing it. This article traces the problem from its technical roots through its operational consequences, examines why commonly used tools fail at language boundaries, and explains what genuine cross-language navigation requires to function at enterprise scale.
What Code Navigation Actually Requires to Work
Code navigation is not a search operation. It is a resolution operation. When a developer asks “where is this function defined?”, the IDE does not scan files for matching text. It resolves the identifier against a structured model of the codebase: a parsed representation of every class, method, variable, and type that exists within scope, along with the relationships between them. That model is built during indexing, maintained continuously as files change, and queried instantly when a navigation action is triggered. The accuracy and completeness of the model determines the accuracy and completeness of every navigation result the developer receives.
This distinction between search and resolution matters because it defines the requirements for cross-language navigation. A text search can look across any files regardless of language because it is not reading the code as code. A navigation tool cannot function across language boundaries unless it has built a model that spans both languages, not simply a model of one language that can also find strings in files belonging to the other. Building that unified model is technically demanding in ways that single-language navigation is not, and the difficulty scales with the number of distinct languages involved. As explored in the detailed examination of andmete ja juhtimisvoo analüüs, code that operates correctly across execution paths requires structural understanding of the full path, not just the segments that fall within any single tool’s scope.
The three specific capabilities that code navigation requires, and that all fail in different ways at language boundaries, are symbol resolution, call graph construction, and dependency tracing. Each deserves examination on its own terms before considering how they interact in practice.
Symbol Resolution and Why It Breaks at Language Boundaries
Symbol resolution is the process of mapping an identifier in source code to its definition. In a single-language environment, this process is well-understood: the compiler or interpreter already performs it, and IDEs replicate that resolution logic using the same grammar and type system rules. The resolution is exact because the rules are unambiguous within one language.
At a language boundary, resolution requires a bridge model that can represent symbols from both languages in a unified structure and trace the connection from an identifier in language A to its corresponding definition in language B. That bridge does not exist in standard IDEs or language servers, because the Language Server Protocol was designed around the assumption that each language server handles one language. When a Java method calls a COBOL program through a defined interface, the Java language server understands the method call but cannot resolve the COBOL target. The developer sees the call, knows it goes somewhere, and cannot follow it without leaving the tool entirely.
Consider a representative example. A Java service invokes a COBOL program by name through a middleware layer:
```java
// Java service calling a COBOL program via a legacy middleware adapter
LegacyAdapter.invoke("CUSTINQ", customerRequest);
```
The Java IDE resolves LegacyAdapter.invoke without difficulty. It knows the method signature and can navigate to its implementation. But "CUSTINQ" is a string literal at the Java level. The IDE has no concept of COBOL program names, no understanding that CUSTINQ refers to a specific compiled program unit with its own data definitions and paragraph structure. Navigation stops at the string. The developer must manually locate the COBOL source, open it in a different editor, and begin reading without any structural context about how the program relates to the calling Java code.
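What a unified model would do with this call site can be sketched in miniature. The sketch below is illustrative only: the class and method names are assumptions for this article, not any actual product API. The essential move is that the COBOL program name becomes an indexable symbol, so the string literal at the Java call site resolves to a definition instead of dead-ending as text.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of a unified symbol index. The key design choice:
// the index is keyed by the external program name, which both a COBOL
// parser and a Java adapter-call analysis can produce independently.
public class UnifiedSymbolIndex {

    // Language-agnostic symbol identity: language, kind, name, location.
    public record Symbol(String language, String kind, String name, String location) {}

    private final Map<String, Symbol> programsByName = new HashMap<>();

    // Index a COBOL program unit under its external program name.
    public void indexCobolProgram(String name, String sourceFile) {
        programsByName.put(name, new Symbol("COBOL", "program", name, sourceFile));
    }

    // Resolve an adapter call site: the string literal passed to the
    // middleware adapter is treated as a reference, not as opaque text.
    public Optional<Symbol> resolveAdapterCall(String programNameLiteral) {
        return Optional.ofNullable(programsByName.get(programNameLiteral));
    }
}
```

With such an index, go-to-definition on the `"CUSTINQ"` literal becomes a lookup rather than a manual search through COBOL source.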
Call Graph Construction Across Heterogeneous Codebases
A call graph is a data structure that represents which functions call which other functions throughout a codebase. IDEs use call graphs to implement features like “find all callers” and “call hierarchy,” which show a developer every path that leads to a given function and every function that a given function invokes. In a single-language environment, call graph construction is a natural byproduct of indexing the codebase.
In a multi-language environment, the call graph must span language boundaries to be complete. A call graph that terminates at every point where execution crosses into a different language is not a call graph of the system; it is a collection of partial graphs, one per language, with disconnected edges at every language boundary. For a developer tracing an execution path through a system that mixes languages, this means the trace terminates every time the path crosses a language boundary, requiring a manual step to pick it up again in the next language.
The problem is acute in mainframe environments, where a single business transaction might involve JCL orchestrating the execution sequence, COBOL programs performing the core business logic, and SQL queries reading and writing data. As detailed in the analysis of JCL to COBOL mapping, these three layers are deeply entangled: JCL defines what runs and in what order, COBOL defines what the programs do, and SQL defines what data they access. A call graph that covers only COBOL, or only JCL, or only SQL, describes a fragment of the system rather than the system itself. Tracing anything meaningful requires all three layers to be connected in a single model.
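The idea of a call graph that survives boundary crossings can be pictured as a graph whose edges carry the mechanism of the crossing: a JCL EXEC statement, a COBOL CALL, an embedded SQL access. The sketch below is a hypothetical illustration (the node and mechanism labels are invented), not a description of any tool's internals:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: edges record which inter-language mechanism
// connects two nodes, so a trace does not terminate at a boundary.
public class CrossLanguageCallGraph {

    public record Edge(String from, String to, String mechanism) {}

    private final Map<String, List<Edge>> outgoing = new LinkedHashMap<>();

    public void addEdge(String from, String to, String mechanism) {
        outgoing.computeIfAbsent(from, k -> new ArrayList<>()).add(new Edge(from, to, mechanism));
    }

    // Follow edges from a starting node regardless of which language each
    // endpoint belongs to; the visited set guards against cycles.
    public List<Edge> trace(String start) {
        List<Edge> path = new ArrayList<>();
        collect(start, path, new HashSet<>());
        return path;
    }

    private void collect(String node, List<Edge> path, Set<String> visited) {
        if (!visited.add(node)) return;
        for (Edge e : outgoing.getOrDefault(node, List.of())) {
            path.add(e);
            collect(e.to(), path, visited);
        }
    }
}
```

A trace that starts at a JCL step and ends at a database table is possible only because both edges live in the same structure; split the graph by language and the trace stops after the first edge.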
Dependency Tracing When Languages Share Data
Dependencies between components in a multi-language system are often mediated through shared data: a database table that COBOL writes and Java reads, a file that a batch job produces and an API consumes, or a message queue that a Python producer writes and a Go consumer reads. These data-mediated dependencies are real and consequential. A change to the table schema, the file format, or the message structure affects both the producer and the consumer, but they are not represented in any single language’s dependency model.
Dependency tracing in a multi-language environment therefore requires understanding not just code-to-code calls but data-to-code relationships: which programs read or write a specific table column, which services depend on a specific file format, which consumers are affected by a change in a message schema. This kind of tracing is entirely outside the scope of standard IDE navigation and requires a tool that models the full system, including the data layer, rather than treating each language’s code in isolation.
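One way to represent these data-mediated dependencies is as explicit reader and writer relationships on shared data elements, so that a producer's downstream consumers can be enumerated across languages. A minimal sketch, with invented component names:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: data-mediated dependencies modeled as explicit
// reader/writer edges on a shared data element, so the consumers of a
// COBOL program's output are enumerable even in other languages.
public class DataDependencyModel {

    private final Map<String, List<String>> writers = new HashMap<>();
    private final Map<String, List<String>> readers = new HashMap<>();

    public void recordWrite(String component, String dataElement) {
        writers.computeIfAbsent(dataElement, k -> new ArrayList<>()).add(component);
    }

    public void recordRead(String component, String dataElement) {
        readers.computeIfAbsent(dataElement, k -> new ArrayList<>()).add(component);
    }

    // Everything downstream of a component: the readers of every data
    // element that the component writes.
    public List<String> consumersOf(String component) {
        List<String> result = new ArrayList<>();
        writers.forEach((dataElement, ws) -> {
            if (ws.contains(component)) {
                result.addAll(readers.getOrDefault(dataElement, List.of()));
            }
        });
        return result;
    }
}
```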
The Specific Ways Navigation Fails in Common Multi-Language Stacks
The failure modes of code navigation across languages are not abstract. They appear in specific, predictable situations that arise routinely in enterprise development environments. Examining them concretely makes clear why generic search tools cannot substitute for genuine cross-language navigation.
COBOL and Java: The Most Common Enterprise Boundary
The COBOL-to-Java boundary is the most prevalent language boundary in large enterprise systems, particularly in financial services, insurance, and government. Decades of COBOL investment coexist with Java modernization efforts in a hybrid architecture where COBOL handles batch processing and Java handles transaction processing and APIs. The two languages communicate through defined interfaces: CICS transactions, message queues, shared databases, and file-based handoffs.
Navigating across this boundary in practice reveals the depth of the problem. A Java developer investigating unexpected behavior in a transaction needs to follow the execution path into the COBOL batch program that processed the underlying data. The Java IDE shows where the interface is invoked. It cannot show what the COBOL program does with the input, what data it reads, what calculations it performs, or what it writes back. The developer needs COBOL expertise and COBOL tooling to continue, neither of which may be readily available on the Java-oriented team. The result is either a slow manual investigation or escalation to someone with the required knowledge, both of which represent navigation failures that cost time and increase incident duration.
On the COBOL side, the equivalent failure occurs when a COBOL developer needs to understand which Java services consume the data the COBOL program produces. Standard COBOL tooling has no model of Java code. The developer can see the COBOL program’s output, including a database write or a file update, but cannot follow that output forward to identify which Java services read it. Any change to the output format requires manual coordination with Java teams, because there is no tool that can enumerate the consumers automatically. COBOL modernization depends critically on resolving exactly this gap: until the full dependency chain is visible across both languages, safe modernization is not possible.
JCL and COBOL: Orchestration Without Visibility
JCL is the orchestration layer for mainframe batch processing. It controls which programs run, in what sequence, with what parameters, and against what files and datasets. The relationship between JCL and the COBOL programs it invokes is a fundamental structural dependency: change the JCL, and the behavior of the COBOL programs changes. Change a COBOL program’s expected input format, and the JCL datasets that feed it may need to change as well.
Standard COBOL analysis tools do not parse JCL. Standard JCL analysis tools do not parse COBOL. The connection between a JCL step that invokes PGM=CUSTINQ and the COBOL program named CUSTINQ exists in the running system but not in any single tool’s model. A developer using either tool in isolation cannot see the complete picture. They know what the JCL step invokes by name, but not what the program does. Or they know what the COBOL program does, but not how it is invoked, with what parameters, or in what job stream sequence.
This gap creates specific risks for production systems. A developer modifying a COBOL program’s working storage definitions may inadvertently change how the program handles data passed from a specific JCL step, without any tool warning that the change affects the JCL-defined execution context. A developer restructuring a JCL procedure may alter the sequence in which programs run, without any tool showing which COBOL programs depend on that sequence for correct operation. As detailed in the examination of JCL static analysis solutions, visibility into cross-program dependencies and dataset usage within JCL environments requires dedicated analysis that standard tools simply do not provide.
Here is what the same dependency looks like from each language’s perspective with standard tooling, versus what a unified model would show:
| What the developer sees | JCL-only view | COBOL-only view | Unified cross-language view |
|---|---|---|---|
| Program invocation | PGM=CUSTINQ (name only) | Not visible | CUSTINQ is invoked by 3 JCL procedures with specific PARM values |
| Input datasets | DD names listed | Not visible | Reads CUSTFILE (defined in CUSTMAST.JCL step 2) |
| Output datasets | DD names listed | Not visible | Writes CUSTRPT (consumed by RPTPRT job) |
| Business logic | Not visible | PROCEDURE DIVISION visible | Full flow from JCL invocation through COBOL logic to output |
| Change impact | Unknown | Unknown | 4 JCL procedures, 2 downstream COBOL programs, 1 database table |
Modern Language Stacks: Python, Go, and C# Across Services
In distributed systems built from modern languages, the navigation problem takes a different form. Rather than the COBOL-Java language gap, the challenge is the service boundary combined with the polyglot stack. A Python data processing service feeds a Go API that feeds a C# front-end. Each service is built with its own tools, its own IDE configuration, and its own dependency model. The connections between services exist at the API layer, but standard navigation tools have no model of inter-service API relationships.
A developer modifying a response structure in the Python service needs to know which fields the Go API depends on and which fields the C# front-end ultimately displays. Without cross-language, cross-service navigation, they must manually inspect each downstream service’s code, search for references to the relevant field names, and hope that naming conventions are consistent enough that the search is reliable. As discussed in the context of Go static analysis tools, even within a single Go service, understanding call hierarchies and tracking dependencies between modules is a non-trivial problem. Extending that problem across service boundaries and language boundaries simultaneously is an order of magnitude harder.
The same pattern applies to C# systems that call shared services written in Java, or Python pipelines that write to databases consumed by .NET applications. In each case, the standard tools for each language provide accurate navigation within that language and produce nothing useful at the boundary where execution crosses into a different language or service.
SQL and Application Code: The Invisible Data Layer
SQL is present in nearly every enterprise system, and yet it is the most consistently overlooked component of cross-language navigation. Application code writes SQL queries that reference table names, column names, join conditions, and stored procedures. The database schema defines those tables and columns. The relationship between application code and the database schema is a dependency that, if broken by a schema change, causes runtime failures. But standard IDEs treat SQL strings as strings, not as code with navigable structure.
A developer who changes a column name in a schema needs to find every reference to that column in every application, across every language, in every query. A text search for the column name is unreliable: short column names collide with variable names, log messages, and comments. A symbol-aware search requires a tool that models both the SQL schema and the application code referencing it, understands that "customer_id" in a Java query string is a reference to the database column customer_id, and can enumerate all such references across languages. Without that model, schema changes are manually intensive and statistically incomplete.
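The gap between the two kinds of search can be made concrete. In the sketch below (the `Reference` record and its fields are illustrative assumptions, not a real tool's data model), a text match counts every occurrence of the name, while a symbol-aware query counts only the occurrences the model knows to be SQL column references:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a modeled reference knows what kind of thing
// the matched text is, which a raw text search cannot know.
public class ColumnReferenceIndex {

    public record Reference(String file, String language, String kind, String text) {}

    private final List<Reference> references = new ArrayList<>();

    public void add(String file, String language, String kind, String text) {
        references.add(new Reference(file, language, kind, text));
    }

    // Text search: any occurrence of the name, whatever it is.
    public long textMatches(String name) {
        return references.stream().filter(r -> r.text().contains(name)).count();
    }

    // Symbol-aware search: only references the model identified as
    // column references inside SQL statements.
    public long columnReferences(String name) {
        return references.stream()
                .filter(r -> r.kind().equals("sql-column") && r.text().equals(name))
                .count();
    }
}
```

The text search over-counts (log messages, unrelated variables) while offering no guarantee it has not also under-counted; the symbol-aware query returns exactly the references a schema change would break.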
Why IDE Extensions and Language Servers Cannot Solve This
IDE extensions and language servers are designed to provide language-specific intelligence. They parse code according to a specific grammar, build a language-specific symbol index, and serve queries through the Language Server Protocol, which defines a standard interface for language features including go-to-definition, find-references, and hover documentation. The protocol is language-agnostic at the transport layer but language-specific in its content: each language server produces results for its own language only.
Connecting two language servers within the same IDE does not solve cross-language navigation. Each server has its own index. When a developer requests “find all references” for a symbol, the request goes to the language server for the current file’s language. That server returns references it knows about, which are limited to the files it has indexed. It does not query the other language server, and even if it did, there would be no shared symbol model through which to express cross-language relationships.
This is a structural limitation of the LSP architecture, not a configuration problem. It can be partially worked around in specific, narrow cases, such as a language server that also parses inline SQL within Python f-strings, but it cannot be generalized to arbitrary cross-language dependencies without building exactly the kind of unified multi-language model that goes beyond what any language server was designed to provide. The challenges that static analysis faces with metaprogramming within a single language illustrate the depth of the problem: if reasoning about dynamically generated code within one language requires specialized techniques, reasoning across multiple languages with different grammars and runtime models requires an entirely different architectural approach.
What Language Servers Deliver Well (and Where They Stop)
Language servers excel at the tasks they were designed for: real-time diagnostics, intelligent completion, single-language symbol resolution, and in-editor refactoring within a bounded scope. These capabilities are valuable and should not be dismissed. The problem is not that language servers are inadequate tools; it is that they are single-language tools applied to multi-language problems, and that mismatch produces predictable and costly failures at exactly the points where precision matters most.
The table below maps specific navigation tasks against what language servers deliver and where the gap begins:
| Navigation task | LSP within one language | LSP across language boundary |
|---|---|---|
| Go to definition | Exact, instant | Fails: stops at the call site |
| Find all references | Complete within indexed files | Incomplete: misses references in other languages |
| Call hierarchy | Accurate for single-language callers | Truncated: boundary callers are absent |
| Rename symbol | Safe within one language | Dangerous: renames miss cross-language usages |
| Impact analysis | Scoped to current language | Blind to downstream consumers in other languages |
Grep and Text Search: Why They Are Not an Acceptable Substitute
When language servers fail at boundaries, developers reach for text search. grep, IDE-level search, and platform search like GitHub Code Search all find strings in files without regard for language. They have no concept of “symbol” or “reference”, only string occurrences. For short, common identifiers this means enormous result sets that require manual filtering. For identifiers that exist in multiple languages with different meanings, results conflate distinct code elements that happen to share a name.
More dangerous than the noise is the incompleteness. Text search misses references where naming conventions differ across languages, where an identifier is constructed dynamically, where the connection is mediated by configuration or a name registry, or where the relationship is expressed through data rather than direct code reference. These gaps are not visible in the search results: the developer sees what the search found, has no way of knowing what it missed, and makes decisions based on an incomplete picture that appears complete. As examined in the broader context of static code analysis for maintainability, the inability to accurately reason about what code does and what it connects to is not a minor inconvenience; it is the root cause of technical debt accumulation, defects introduced during maintenance, and the growing cost of making changes safely.
The Operational Costs That Accumulate at Language Boundaries
The navigation failures described above do not manifest as single-incident problems. They accumulate across every task, every developer, and every team that operates in a multi-language environment. Understanding the cost requires looking at the recurring situations where navigation breaks down and calculating the aggregate effect.
Onboarding in Polyglot Teams Takes Significantly Longer
A developer joining a team that works in a single language and a single codebase can become productive relatively quickly. The IDE handles navigation, the code is self-documenting through its structure, and the mental model the developer builds reflects the actual system. A developer joining a team that works across multiple languages faces a fundamentally different situation. The tools do not navigate the boundaries, so the mental model must be built manually through documentation, pair programming, and trial and error.
This manual model-building takes weeks rather than days. The developer must learn not just the code in their primary language but also enough about the adjacent languages to understand what they call, what calls them, and what data flows across the boundaries. In large organizations with high turnover or frequent team rotations, this extended onboarding time is a recurring cost rather than a one-time investment. Every person who joins a polyglot team pays the full cost of reconstructing the cross-language mental model from scratch, because the tools provide nothing that carries it forward.
Production Incidents Last Longer When Traces Cross Language Boundaries
When a production incident requires tracing an execution path that crosses language boundaries, every boundary crossing is a manual step. The on-call developer, already operating under time pressure, must switch tools, search by text in a different language’s codebase, and manually connect the results to the trace they were building. In a system with three or four language layers, a complete root-cause investigation might require four or five such boundary crossings, each adding minutes to an investigation whose cost is measured in the time users remain affected.
The cumulative effect across an organization operating multiple multi-language services is a systematically elevated mean time to resolution for any incident that crosses a language boundary. This is not a failure of individual developers; it is a structural consequence of tooling that does not model the connections the system actually has. Organizations that have invested in cross-language visibility consistently report faster incident resolution as one of the most direct and measurable benefits, precisely because that investment removes the manual boundary-crossing steps that inflate investigation time.
Risky Changes Become Riskier Without Cross-Language Impact Visibility
Every change to shared code in a multi-language system carries undetermined risk until the full set of consumers across all languages is known. Without cross-language navigation, that risk is not determined before the change is made. It is discovered after, when broken consumers surface in testing or, worse, in production. This is not a rare failure mode. It is the standard outcome of maintaining shared data structures, interfaces, or utilities in a system where the downstream consumers speak different languages.
The conservative response to this uncertainty is excessive caution: larger test efforts, longer review cycles, more coordination meetings, and more frequent change freezes around critical periods. All of these are real costs that accumulate at every change cycle in a multi-language system. They represent time and effort spent compensating for the absence of cross-language navigation rather than invested in delivering value. The legacy modernization landscape is shaped in significant part by these accumulated costs: organizations reach for modernization because maintaining existing systems has become prohibitively expensive, and cross-language navigation failures are a major driver of that maintenance cost.
What Cross-Language Navigation Actually Requires
Solving code navigation across multiple languages requires building the unified model that language servers individually cannot provide. That model must satisfy several requirements that are necessary conditions for useful cross-language navigation, not optional enhancements.
A single shared symbol index spanning all languages. Every named element in every language, including functions, classes, fields, procedures, tables, and data definitions, must be represented in one index with a common identity model. The identity of a symbol cannot be language-specific if cross-language references are to be resolved against it.
Language-aware parsers for every language in the system. Each language must be parsed using its own grammar, not approximated by a generic parser or by pattern matching. The structural output of each parser must map to the shared identity model so that cross-language relationships can be expressed as connections between correctly identified symbols.
Explicit modeling of inter-language interfaces. The mechanisms through which different languages interact, including program invocations by name, database tables, file formats, message schemas, and API contracts, must be represented in the model as first-class connection types, not treated as opaque strings or left out of the model entirely.
Dependency tracing that includes data-layer relationships. The model must represent not just code-to-code calls but data-mediated dependencies, because in multi-language systems, data is often the primary medium through which one language’s output becomes another language’s input.
Query performance that supports interactive navigation. The index must support sub-second query response for common navigation operations. A model that requires batch analysis runs rather than interactive queries is useful for offline impact analysis but cannot substitute for real-time navigation during active development.
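The first requirement, a common identity model, can be illustrated with a sketch in which two different language analyses normalize to the same index key. The normalization rule shown here, uppercasing the external program name, is an invented example of such a convention:

```java
// Hypothetical sketch of a language-agnostic identity key. A COBOL
// parser indexing a program unit and a Java analysis recording an
// adapter call site both normalize to the same key, so the reference
// in one language resolves against the definition in the other.
public class SymbolIdentity {

    public record Key(String kind, String qualifiedName) {}

    // Produced by the COBOL side when a program unit is indexed.
    public static Key fromCobolProgram(String programName) {
        return new Key("program", programName.toUpperCase());
    }

    // Produced by the Java side when an adapter call literal is seen.
    public static Key fromJavaAdapterLiteral(String literal) {
        return new Key("program", literal.toUpperCase());
    }
}
```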
These requirements describe an enterprise code intelligence platform, not an IDE extension or a language server. Building and maintaining such a platform is the technical foundation for making multi-language code navigation work in practice. The alternative, accepting the navigation failures and paying their costs indefinitely, becomes less tenable the larger and more complex the multi-language system grows.
How SMART TS XL Addresses Multi-Language Navigation
SMART TS XL is built on the premise that enterprise systems cannot be understood through the lens of any single language or any single repository. Its Software Intelligence platform ingests source code from every language and platform in the environment, parses each using language-specific analysis, and builds a unified cross-reference index that represents the relationships between elements regardless of which language they belong to. Navigation queries against that index return results that span language boundaries because the index models the complete system, not a language-specific slice of it.
The platform explicitly models the inter-language interfaces that standard tools ignore. A JCL step invoking a COBOL program by name is represented as a dependency in the cross-reference graph, connecting the JCL step to the COBOL program unit. A Java method writing to a database table is represented as a data dependency connecting the Java code to the table definition and from there to any other language that reads the same table. A COBOL copybook referenced by multiple programs is represented as a shared definition, so that any change to the copybook structure immediately surfaces all programs affected by the change, regardless of language. This explicit modeling of inter-language dependencies is what distinguishes a genuine cross-language navigation platform from a collection of language-specific tools operating in parallel.
SMART TS XL’s impact analysis capability demonstrates the practical value of this unified model. When a developer needs to understand the consequences of changing a shared component, such as a COBOL data definition, a database schema element, a Java interface, or a JCL procedure, the platform traces the dependency graph from that component across all language boundaries and returns a complete picture of what will be affected. The result is presented as a navigable report organized by language, by component, and by specific reference location, giving developers the complete information they need before making a change rather than discovering the consequences afterward. This capability directly addresses the risk accumulation described in the previous section, converting undetermined cross-language risk into quantified, enumerable impact.
Cross-Language Navigation as a Property of the Whole System
The central insight of this article is that code navigation in multi-language environments is a property of the whole system, not of any individual language tool. An IDE that navigates COBOL perfectly and a separate IDE that navigates Java perfectly do not together produce a system that navigates the COBOL-Java boundary. They produce two independent navigation systems with a gap between them, and that gap is where the most consequential relationships in the system live.
Closing that gap requires a different kind of tool: one that models the system as a whole, represents relationships across language boundaries as first-class entities, and provides navigation that follows those relationships wherever they lead. For organizations running complex, multi-language systems at enterprise scale, that capability is not a luxury. Every day of development without it is a day in which the cost of cross-language navigation failures accumulates: in the form of slower onboarding, longer incidents, riskier changes, and the gradual concentration of irreplaceable knowledge in the individuals who have manually built the cross-language mental models that the tools cannot provide.