About CSV to JSON Converter

Convert CSV to JSON and back. Handles headers, quotes, and special characters. Free, no sign-up required.

How to use

  1. Paste your CSV text into the left textarea. The first row is treated as headers by default — leave the 'First row is header' checkbox checked unless your file is genuinely headerless, in which case columns are named col0, col1, col2.
  2. Click the Convert arrow button to parse the CSV and emit a JSON array of objects on the right. The parser handles RFC 4180 quoted fields, embedded commas inside double quotes, and double-double-quote escape sequences ("" inside a quoted field becomes a literal quote).
  3. If your data flows the other direction, click the JSON to CSV button to reverse — paste a JSON array into the right side and get tabular CSV out, with the union of object keys as the header row and properly escaped values.
  4. Use Copy JSON to push the result to the clipboard for paste into JavaScript, Python, jq, or a database import. The output is standard double-quoted JSON, safe to feed into JSON.parse() or any spec-compliant parser.
  5. Click Clear to reset both panels when switching datasets. The conversion runs entirely in your browser with no upload — paste sensitive customer or financial data without it leaving your machine.
  6. For files with non-ASCII headers (accented characters, CJK) or a UTF-8 BOM, check that your source file is genuinely UTF-8 encoded. Excel exports often default to UTF-16 LE or Windows-1252, which produces garbled keys; export as 'CSV UTF-8' from File → Save As.
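The quoting rules in step 2 can be sketched in a few lines. This is a hypothetical helper for illustration, not the tool's actual code: it splits one CSV line into fields, honoring quoted fields, embedded commas, and "" escapes.

```javascript
// Minimal RFC 4180-style field splitter for a single line (no embedded
// newlines). Illustrative sketch only.
function parseLine(line) {
  const fields = [];
  let cur = "";
  let inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"') {
        if (line[i + 1] === '"') { cur += '"'; i++; } // "" becomes a literal quote
        else inQuotes = false;                         // closing quote
      } else {
        cur += ch;                                     // commas are literal here
      }
    } else if (ch === '"') {
      inQuotes = true;                                 // opening quote
    } else if (ch === ',') {
      fields.push(cur);                                // field boundary
      cur = "";
    } else {
      cur += ch;
    }
  }
  fields.push(cur);                                    // last field
  return fields;
}
```

With headers parsed the same way, zipping the header fields against each data row's fields yields the array of objects shown on the right.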

Examples

Simple table with headers
Input: name,age,city\nAlice,30,NYC\nBob,25,LA. Output: [{"name":"Alice","age":"30","city":"NYC"},{"name":"Bob","age":"25","city":"LA"}]. Note that 30 and 25 stay as strings — CSV has no native type system, so all values come out as strings unless you post-process with parseInt or Number().
Field with embedded comma
Input: id,address\n1,"123 Main St, Apt 4". The quoted address survives the comma — output: [{"id":"1","address":"123 Main St, Apt 4"}]. Without the quotes, RFC 4180 splits on every comma and the address would shred across two columns.
Escaped double quote
Input: id,quote\n1,"She said ""hi"" today". The "" pair represents a literal quote. Output: [{"id":"1","quote":"She said \"hi\" today"}]. The JSON output further escapes the inner quote with a backslash to stay valid.
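The second escaping step is exactly what JSON.stringify does; you can verify it in any JavaScript console:

```javascript
// The CSV-level "" escape has already been collapsed to a single quote
// during parsing; JSON serialization then adds its own backslash escape.
const row = { id: "1", quote: 'She said "hi" today' };
const json = JSON.stringify(row);
// json is {"id":"1","quote":"She said \"hi\" today"}
```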

Frequently asked questions

Why are all my numeric values quoted strings in the JSON output?
CSV is fundamentally a text format with no type information: 30 and "30" denote the same value in the file, and nothing marks a field as numeric. The converter preserves every value as a string to avoid silent data loss (e.g., the leading zero in postal code '01234' would be destroyed by automatic Number() coercion). After conversion, run JSON.parse(json) and map the keys you know are numeric: rows.map(r => ({...r, age: Number(r.age)})).
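Spelled out, that post-processing step looks like this (the field names are just examples):

```javascript
// Coerce known-numeric keys after parsing, leaving string-typed fields
// like zip codes untouched.
const rows = JSON.parse('[{"name":"Alice","age":"30","zip":"01234"}]');
const typed = rows.map(r => ({ ...r, age: Number(r.age) }));
// typed[0].age is the number 30; typed[0].zip is still the string "01234"
```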
How does the parser handle quotes inside fields per RFC 4180?
Per RFC 4180, a field that contains a comma, newline, or double quote must be wrapped in double quotes. To embed a literal double quote inside a quoted field, double it: "She said ""hi""" represents the string: She said "hi". Single quotes have no special meaning. The converter follows this strictly. Many real-world CSVs (especially from Excel before 2016) violate this — if you see broken parsing, check whether your file uses backslash-escaped quotes instead, which is non-standard.
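The unescaping direction of that rule fits in one function. unquoteField is a hypothetical helper shown for illustration, not the tool's internal API:

```javascript
// Strip the outer quotes from a quoted field and collapse each "" pair
// to a literal ". Unquoted fields pass through unchanged.
function unquoteField(field) {
  if (field.length >= 2 && field.startsWith('"') && field.endsWith('"')) {
    return field.slice(1, -1).replace(/""/g, '"');
  }
  return field;
}
```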
Can it handle TSV or pipe-delimited data?
The current parser is comma-delimited only. For tab-separated values, do a global find-replace of \t to , before pasting (assuming your data has no commas). For pipe (|) delimited, same approach. If your data contains both tabs and commas legitimately, you need a delimiter-aware tool — Papa Parse (open source) and Python's csv module support custom delimiters.
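The find-replace workaround is a one-liner if you prefer to script it, again assuming the data itself contains no literal commas:

```javascript
// Swap every tab for a comma, then paste the result as CSV.
const tsv = "name\tage\nAlice\t30\nBob\t25";
const csv = tsv.replace(/\t/g, ",");
// csv === "name,age\nAlice,30\nBob,25"
```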
What about nested objects and arrays?
Plain CSV is flat by design — one row, one record, one level deep. The converter cannot infer nesting from a header like 'address.street'. Common workarounds: (1) flatten on import, then post-process to nest by splitting key on dots; (2) use JSONL (one JSON object per line) instead of CSV for structured data; (3) for arrays, encode as semicolon-separated values inside one cell and split client-side: 'tags' column with 'red;blue;green' becomes {tags: ['red','blue','green']} after split(';').
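Workaround (1) can be sketched as a small post-processing pass; nestKeys is a hypothetical helper name, not part of the tool:

```javascript
// Split dotted header keys into nested objects after a flat import.
function nestKeys(flat) {
  const out = {};
  for (const [key, value] of Object.entries(flat)) {
    const parts = key.split(".");
    let node = out;
    for (let i = 0; i < parts.length - 1; i++) {
      node = node[parts[i]] ??= {}; // create intermediate objects as needed
    }
    node[parts[parts.length - 1]] = value;
  }
  return out;
}
```

For example, nestKeys({"name":"Alice","address.street":"123 Main St","address.city":"NYC"}) yields {name:"Alice", address:{street:"123 Main St", city:"NYC"}}; the array workaround in (3) is just 'red;blue;green'.split(';').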
Why does my Excel-exported CSV look wrong with weird symbols at the start?
Excel UTF-8 exports prepend a Byte Order Mark (BOM): the bytes EF BB BF. Most parsers strip it, but if you see the first column header contain ï»¿ or a similar strange prefix, that is the BOM showing through. Either re-save without a BOM (some editors offer 'UTF-8 without BOM') or strip the first three bytes programmatically. The converter strips a leading BOM from the first cell transparently.
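Stripping it programmatically is a one-character check, because the BOM bytes EF BB BF decode to the single code point U+FEFF at the start of the string:

```javascript
// Drop a leading U+FEFF if present; otherwise return the text unchanged.
function stripBOM(text) {
  return text.charCodeAt(0) === 0xFEFF ? text.slice(1) : text;
}
```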
How big a CSV can this handle?
Browser-only conversion is limited by available RAM and parsing time. Up to about 50,000 rows / 50 MB the tool stays responsive on a typical laptop. Beyond that, the textarea itself becomes sluggish and JSON.stringify of the result can hit single-tab memory ceilings. For multi-million-row datasets, use a streaming parser (csv-parse for Node.js, pandas read_csv with chunksize for Python) instead of an in-browser tool.
Does CSV preserve null values?
No — CSV has no null. An empty field between two commas could mean null, empty string, or zero depending on the producer's convention. The converter outputs empty strings ('') for empty fields. To distinguish, some pipelines emit a literal NULL token or NaN; you must agree on that convention out-of-band. JSON has a real null type, so when going JSON to CSV, null values become empty cells (potentially destroying information on round-trip).
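A sketch of the JSON-to-CSV direction makes the lossy step concrete. toCell is a hypothetical helper, not the tool's actual code; it collapses null and undefined to an empty cell and applies RFC 4180 quoting to everything else:

```javascript
// Serialize one value to a CSV cell: null/undefined become empty cells,
// and values containing commas, quotes, or newlines get quoted with ""
// escapes.
function toCell(value) {
  if (value === null || value === undefined) return "";
  const s = String(value);
  return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
}
```

Note that toCell(null) and toCell("") produce the same output, which is exactly the round-trip information loss described above.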

Part of ToolFluency’s library of free online developer tools. No account needed, no data leaves your device.