Open Access | 27 August 2024

ninjaCap: a fully customizable and 3D printable headgear for functional near-infrared spectroscopy and electroencephalography brain imaging
Alexander von Lühmann, Sreekanth Kura, Walker Joseph O'Brien, Bernhard B. Zimmermann, Sudan Duwadi, De’Ja Rogers, Jessica E. Anderson, Parya Farzam, Cameron Snow, Anderson Chen, Meryem A. Yücel, Nathan Perkins, David A. Boas
Abstract

Accurate sensor placement is vital for non-invasive brain imaging, particularly for functional near-infrared spectroscopy (fNIRS) and diffuse optical tomography (DOT), which lack standardized layouts such as those in electroencephalography (EEG). Custom, manually prepared probe layouts on textile caps are often imprecise and labor intensive. We introduce a method for creating personalized, 3D-printed headgear, enabling the accurate translation of 3D brain coordinates to 2D printable panels for custom fNIRS and EEG sensor layouts while reducing costs and manual labor. Our approach uses atlas-based or subject-specific head models and a spring-relaxation algorithm for flattening 3D coordinates onto 2D panels, using 10-5 EEG coordinates for reference. This process ensures geometrical fidelity, crucial for accurate probe placement. Probe geometries and holder types are customizable and printed directly on the cap, making the approach agnostic to instrument manufacturers and probe types. Our ninjaCap method offers 2.7±1.8 mm probe placement accuracy. Over the last five years, we have developed and validated this approach with over 50 cap models and 500 participants. A cloud-based ninjaCap generation pipeline along with detailed instructions is now available at openfnirs.org. The ninjaCap marks a significant advancement in creating individualized neuroimaging caps, reducing costs and labor while improving probe placement accuracy, thereby reducing variability in research.

1. Introduction

1.1. State of the Art and Motivation

Accurate placement and co-registration of probes and sensors on the head are crucial for non-invasive measurements of targeted neural activity in human neuroscience research. This process typically begins with researchers carefully identifying target brain areas using mapping strategies to brain-atlas-based coordinate systems,1 which then guide the layout, placement, and fixation of probes for data collection and analysis.

This applies not only to functional near-infrared spectroscopy (fNIRS)2,3 and electro-encephalography (EEG) but also to the emerging field of optically pumped magnetoencephalography (oMEG)4 and other head-worn sensors. In EEG, the 10-20 system and its derivatives (10-10 and 10-5) provide established, widely accepted standards for electrode placement.5 Using predefined relative distances between electrode positions and anatomical landmarks, the 10-20 system accommodates variations in head circumferences, ensuring consistent, standardized measurements across individuals. Currently, fNIRS lacks its own standardized layouts and caps, and researchers frequently use the EEG 10-20 system as a workaround.6 However, because the design of fNIRS probes is critical for optimal signal and spatial resolution, this approach has limitations.

  • 1. The distances between 10-20 coordinates are relative and depend on the individual head circumference, whereas fNIRS measurements depend strongly on absolute source-detector distances. Channel distance affects sensitivity to hemodynamic changes in the brain versus superficial tissues, as well as signal strength and quality. In contrast to EEG, scaling probe geometries with the head circumference is usually not desired in fNIRS. Even at a fixed standard 56 cm head circumference, the possible channel distances between 10-5 coordinates do not cluster around the typical target distances for fNIRS optodes but are instead spread across a wide range around them (see Fig. 1).

  • 2. The 10-20 coordinates constrain the shape and layout of fNIRS probe geometries, which becomes a particular issue in high-density probes such as those used for diffuse optical tomography (DOT). High-density grids or triangular/hexagonal geometries, for instance, are often not compatible with the standard EEG coordinates.
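The first limitation can be made concrete with a small sketch: given 3D scalp coordinates, collect each position's nearest-neighbor Euclidean distances and inspect their spread. The coordinates below are illustrative placeholders, not real 10-5 positions.

```python
# Sketch: spread of nearest-neighbor Euclidean distances between scalp
# coordinates. The positions below are illustrative placeholders (mm),
# not real 10-5 coordinates.
from math import dist

positions = {
    "A": (0.0, 0.0, 100.0),
    "B": (28.0, 0.0, 96.0),
    "C": (0.0, 28.0, 96.0),
    "D": (28.0, 28.0, 92.0),
}

def neighbor_distances(points, k=3):
    """For each point, return its k smallest Euclidean distances to the others."""
    out = {}
    for name, p in points.items():
        d = sorted(dist(p, q) for other, q in points.items() if other != name)
        out[name] = d[:k]
    return out

nn = neighbor_distances(positions, k=2)
```

Applied to all 10-5 positions on a 56 cm head, the resulting distances scatter widely around the fNIRS target separations, as Fig. 1 shows.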

Fig. 1

Euclidean channel distance distribution for the first 12 neighbors in the 10-5 system on a 56 cm Colin 27 head. Common targets for fNIRS channel distances are <1.5 cm and 3 to 3.5 cm.


The common alternative is for researchers to manually prepare caps, a laborious and imprecise process that involves aligning probe layouts with (virtual) head models and making cuts in the physical textiles at the corresponding approximate positions. This approach adds variability to the exact placement of probes on the individual’s head, and further variability is added if variations in head morphology, e.g., based on ethnic backgrounds7 or age,8 are not sufficiently captured by the cap designs. Digitizing and co-registering sensor positions on participants’ heads during experiments, e.g., with photogrammetric approaches,9,10 allows for assessing how well the actual and targeted positions match. However, when mismatches are identified, improving the probe layout on the cap is usually a cumbersome and iterative manual process. Moreover, even when a subject-specific head model (e.g., a structural MRI scan or a photogrammetric surface scan) and individual target regions are available, translating this information onto the physical cap is not straightforward.

To address these challenges and to leverage the advantages of widely available advanced rapid prototyping technologies, employing 3D printing for the creation of customized head caps presents a compelling solution that we have explored and refined over the last few years.

1.2. Challenges in 3D Printing Head Caps

What are the challenges in 3D printing head caps for neuroscientific studies? (1) Printing flat objects that accurately conform to the curvature of the head and offer a uniform fit is a non-trivial task. Printing a whole (non-flattened) cap is technically unfeasible for most flexible materials and most users’ printers. Mapping the 3D shape of the head onto a 2D printing plane requires careful consideration of relaxation and deformation strategies and the right choice of the cap’s lattice structure. (2) Materials and manufacturing present another set of challenges: materials that offer the necessary stretchiness and durability need to be identified, and the manufacturing process itself may involve manual labor, adding complexity to the production workflow. (3) Splitting the head into panels and later reuniting those panels requires consideration of the print bed size while minimizing deformation, especially when dealing with three or four panels. (4) In the case of high-density (HD) fNIRS caps, which go beyond sparse fNIRS by enabling overlapping measurements, and ultra-high-density (UHD) fNIRS with probe distances <12 mm, the general challenges are to (4.1) obtain uniform coverage, minimizing gaps between tiles or patches; (4.2) optimize sensitivity, uniformity, and resolution and minimize localization errors on the brain; while (4.3) meeting geometrical constraints, e.g., feasible channel distances that take into consideration both obtainable signal quality and mechanical constraints from optode and holder sizes.

1.3. Our Solution

Solving the above challenges and incorporating years of our experience in headgear and probe designs,11,12 we created a method and process that enables researchers to fabricate 3D-printed caps with fully customizable integrated holders, eliminating the need for laborious manual cutting and preparation of standard EEG caps (see Fig. 2). We call these “ninjaCaps.”

Fig. 2

Conventional process from paradigm to registered measured brain activation. New step (IIIB): automatic generation and 3D printing of caps for customized probe layouts and head anatomies, instead of manual preparation of standard textile caps (IIIA).


This approach can significantly reduce both the time and cost associated with cap preparation, making it more accessible and affordable for researchers to conduct customized studies. Each experiment and even individual can have a dedicated cap, which eliminates the need for reusing and re-assembling caps, and thereby enhances experimental control and reduces potential sources of variability. The reproducibility of experiments can be improved by establishing a “virtual ground truth” through the standardized production of 3D-printed caps. The versatility of 3D printing enables the creation of customized and complex cap designs that can integrate any type of head-mounted sensor, for instance, fNIRS optodes, EEG electrodes, or other sensors or components such as accelerometers, in almost any geometry and mixture. This integration enhances mechanical stability and expands the range of potential applications in research settings. Finally, researchers can tailor the cap design to fit individual head sizes and shapes accurately, improving the quality and reliability of measurements. Rather than relying on standardized head models, the individual head geometry can be precisely replicated, ensuring optimal contact between sensors and the scalp.

In this article, we detail our approach step-by-step, elaborate on innovations and lessons learned, and share resources to enable the community to create their own 3D-printed caps.

2. Methods

2.1. Overview of ninjaCap Generation

We use our established MATLAB-based brain atlas software AtlasViewer12 for probe layout design and for mapping sensor positions to the brain surface. Also in MATLAB, we project from the curved head surface to 2D panels with a spring-relaxation algorithm and use hexagonal lattice structures to obtain uniform deformation and fit. We then automatically extrude and assemble panels and holders in the open-source Blender3D (Blender Foundation) software and 3D print the caps using flexible thermoplastic polyurethane (TPU) filaments. Our material of choice in this work is ninjaFlex (Fenner Precision Polymers, Lititz, Pennsylvania, United States). The cap panels are then assembled with an ultrasonic welder. An overview of this process, which is described in detail in the following sections, is shown in Fig. 3.

Fig. 3

Visualization of ninjaCap generation from probe design to output as detailed in the following sections. Step (a) Sec. 2.2; Step (b) Secs. 2.3 and 2.4; Step (c) Sec. 2.5; Step (d) Sec. 3.1.


To make ninjaCaps available to the community, we host an automated pipeline for cap generation at bfnirs.openfnirs.org. The input to this cloud-based pipeline is the desired head circumference (HC) of the cap together with a “Probe.SD” file that can be generated in AtlasViewer. The output is a set of four .STL files (cap panels) that can be 3D printed with flexible TPU and then assembled by the user.

2.2. Head Model and Probe Design

ninjaCap generation is based on two key input elements: a 3D head model and a probe design specification. The head model provides anatomical guidance during the probe design to target the desired brain areas, and the head surface geometry is used for the generation of the cap panels. The probe design file includes the geometrical constraints (e.g., for anchoring), arrangement (e.g., source-detector separations), and types of probes (i.e., specifying the corresponding holders/grommets) that are then registered onto the surface of the head model. Figure 4 visualizes the source-detector arrangement, necessary anchoring information for probe registration, and the type of grommets (e.g., fNIRS optode or EEG electrode).

Fig. 4

Head-model-based probe generation and registration in AtlasViewer using the default Colin 27 Atlas (a) or other head models (b). Bilateral HD-DOT patches, each with seven sources and 16 detectors that are inter-connected with rigid springs. Patches are flexibly connected to anchor points. In this example, optodes have grommet types “#NIRX2” to allow for the placement of NIRx optodes on the cap. Windows show configuration details for the selected detector on C4.


2.2.1. Head model

Our default surface model in AtlasViewer is the widely used Colin 27 head model.13 Other atlases such as the ICBM 152 or individual MRI anatomies (e.g., in NIFTI format) can be used as well; see https://github.com/BUNPC/AtlasViewer/wiki/Importing-Subject-Specific-Anatomy for a detailed tutorial on how to import them.12 The current ninjaCap pipeline on bfnirs.openfnirs.org does not yet natively support user-defined head models, but the cap generation pipeline and the following process are agnostic to the head model used. Before cap generation, the head circumference (HC) has to be specified. We currently scale the head model linearly with HC only, but other distances can also be employed (in AtlasViewer: the Iz to Nz distance and the RPA to LPA distance).
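The linear HC scaling can be sketched as follows; the scaling center and the vertex format are illustrative assumptions, not the pipeline's actual implementation.

```python
# Minimal sketch of linear head-model scaling with head circumference
# (HC): every vertex is scaled by the HC ratio about a scaling center.
# The center and the vertex format are illustrative assumptions.
def scale_head(vertices, hc_model, hc_target, center=(0.0, 0.0, 0.0)):
    """Scale 3D vertices by the ratio of target HC to model HC."""
    s = hc_target / hc_model
    cx, cy, cz = center
    return [((x - cx) * s + cx, (y - cy) * s + cy, (z - cz) * s + cz)
            for x, y, z in vertices]

# Scaling a 56 cm model to a 58 cm target head circumference:
scaled = scale_head([(10.0, 0.0, 0.0), (0.0, 10.0, 0.0)],
                    hc_model=56.0, hc_target=58.0)
```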

2.2.2. Probe design file

Users can design their probes in AtlasViewer, which are then saved in a “.SD” MATLAB file. Key elements of the .SD file for cap generation are a list of 3D node positions, anchor points, springs, 10-5 landmarks, and the head mesh to be used. Because AtlasViewer is designed for fNIRS, nodes are called optodes in the interface, but they can represent any other type of sensor as well. Available node/optode types are “source,” “detector,” and “dummy.” Source and detector type assignments are needed to construct the fNIRS probe and channel measurement list used for the acquisition and plotting of fNIRS signals, but not for the generation of the physical cap itself. All three node types always have a “Grommet Type” assigned to them. Grommet types (in the format “#<abcde>”) identify the fully customizable 3D holders that will be printed on the cap. Nodes/optodes are interconnected by “springs” that are either flexible and can be stretched, or rigid with a fixed length, which constitutes a geometrical channel distance constraint. This spring-based approach enables the projection and registration of the probe geometry onto the head surface. Anchors generate another constraint during probe registration: nodes can be anchored to 10-5 landmarks on the head, which forces the node position to remain exactly on this landmark during spring relaxation. The user typically assigns such anchor points to a small set of nodes/optodes that are to be fixed to their corresponding anatomical landmarks. When the head model is scaled, the subset of grommets/dummies anchored to these relative locations scales with the head. By contrast, any distance between optodes constrained by a rigid spring is a fixed parameter that is not scaled but remains constant to maintain the specified probe geometry. Two examples follow: (1) a cap can be generated with only “dummy” nodes that have EEG-electrode grommet types assigned to them. 
Anchoring these to standard 10-20 positions will result in a 3D-printed standard 10-20 EEG cap. (2) To place an HD-DOT geometry with fixed inter-optode distances over the corresponding somatosensory area for a finger tapping experiment, one could anchor one measurement optode to the C3/C4 10-20 position and at least three “dummy” optodes with springs without fixed distance constraints to 10-20 landmarks to further constrain the HD-DOT patch placement. Figure 4 shows an example of such a bilateral HD-DOT probe registered to the Colin atlas and provides details for its configuration. Please see Ref. 12 and https://github.com/BUNPC/AtlasViewer/wiki/Designing-3D-Source-Detector-Geometry-in-AtlasViewer for a detailed tutorial on how to design 3D source-detector geometries and the corresponding SD file in AtlasViewer.
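As an illustration, the design elements described above (nodes with grommet types, rigid/flexible springs, and anchors) can be represented as follows. The Python field names are our own and do not reflect the actual .SD MATLAB schema.

```python
# Illustrative representation of the probe-design elements described
# above (nodes, springs, anchors). Field names are our own and do not
# reflect the actual .SD MATLAB file schema.
probe = {
    "nodes": {
        "S1": {"type": "source",   "grommet": "#NIRX2"},
        "D1": {"type": "detector", "grommet": "#NIRX2"},
        "A1": {"type": "dummy",    "grommet": "#NONE"},
    },
    # (node, node, length): rigid springs carry a fixed length in mm,
    # flexible springs carry None and may stretch during registration.
    "springs": [
        ("S1", "D1", 30.0),
        ("A1", "S1", None),
    ],
    # dummy node A1 is pinned to the 10-20 landmark C3 during relaxation
    "anchors": {"A1": "C3"},
}

def rigid_springs(p):
    """Return the springs that impose fixed channel-distance constraints."""
    return [s for s in p["springs"] if s[2] is not None]
```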

2.3. Flattening and Projection of Sensor Locations

With the curved head model and co-registered probe positions available, the crucial next step is to create flat panels from this head model that can be 3D printed. The panels need to yield accurate placement of each sensor holder at its intended position on the curved surface of the head after cap assembly. In our first ninjaCap versions, our approach was to calibrate points between the curved head and a flat template and then stretch flat panels to fit over the curved head surface. However, iterating over several cap generations led us to a different and more precise approach for projecting 3D positions onto 2D cap panels, depicted in Fig. 5.

Fig. 5

(I) Head model with a co-registered pre-frontal fNIRS probe. (II) Cutting into panels. (III) Flattening panels with iterative spring-based relaxation (steps 1 to 4). Flattening is shown for the panel of the left (L) side of the head.


A prerequisite of the following approach is that all of the EEG 10-5 points, as well as the sensor/dummy nodes from the registered probe on the head, are first incorporated as vertices into the surface mesh of the head.

The 3D head mesh is cut into four pieces [see Fig. 5(II)]. An axial cut through the Inion (Iz) and 15 mm above the Nasion (Nz) discards any part of the head mesh below the eyebrows (including the neck). With the head centered at the Iz-Cz-Nz midline, two sagittal cuts are made on either side so that the middle part has a width of 3/7 of the total distance between LPA and RPA through Cz. These sagittal cuts pass very close to the reference points NFP1/2, NFP1/2h, I1/2, and I1/2h of the EEG 10-5 system and cut the upper part of the head into three parts (left side, mid, right side) that are then used for the generation of the left, right, and mid/top panels of the cap. The mid/top panel is then cut at the midline into two pieces. The choice of these cutting coordinates results from a joint optimization to yield surfaces that (a) can be flattened well and (b) fit on the print beds of typical commercial 3D printers (30×30 cm).
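The sagittal split can be sketched as a simple classification of vertices by their left-right coordinate. This is our illustration of the 3/7 rule under the assumption that x runs from LPA to RPA, not the pipeline's actual mesh-cutting code.

```python
# Sketch of the sagittal panel split: the mid panel spans 3/7 of the
# LPA-RPA width, centered on the Iz-Cz-Nz midline. Assumes x runs from
# LPA to RPA; this illustrates the rule, not the actual mesh cutting.
def panel_of(x, lpa_x, rpa_x):
    """Assign a vertex x coordinate to the left, mid, or right panel."""
    half_mid = (3.0 / 7.0) * (rpa_x - lpa_x) / 2.0
    mid_x = (lpa_x + rpa_x) / 2.0
    if x < mid_x - half_mid:
        return "left"
    if x > mid_x + half_mid:
        return "right"
    return "mid"
```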

The resulting three surfaces are then individually flattened and projected to 2D [see Fig. 5(III)] following an iterative procedure that treats the edges between vertices of the 3D meshes as springs. Importantly, we flatten the entire head surface meshes, not only the probe coordinates. During this procedure, the distances of sensor/optode node vertices relative to their closest three 10-5 vertices are kept fixed. (1) We apply a normalized gravity force along the Z axis that pulls all vertices toward the X-Y plane at Z=0 in very small displacements, normalized to a maximum of 0.1 mm per step. (2) We then apply spring forces that restore the relative distances of the curved mesh as consistently as possible, leading to uniform distortion. (3) We repeat steps (1) and (2) until all vertices’ Z coordinates are approximately 0 (within 0.1 mm). Holding the total area of the surface nominally constant during flattening ensures that positions on the flat panels end up at the corresponding curved positions on the head when all panels are re-assembled into a 3D cap. This approach preserves the local distances between 10-5 reference points, which also locally preserves the area and thus makes the area of the printable flattened panel and the area of its corresponding 3D headpiece roughly identical. Because optode/electrode positions are vertices in those meshes, the flattening transformation also yields the correct locations of the corresponding grommets to be placed on the 2D cap panels. Figure 6 shows the pseudocode for the key steps of the transformation. Videos 1 and 2 in the Appendix visualize the flattening and spring-relaxation process for a side and a mid panel.
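The gravity-plus-springs loop can be sketched as follows. This is a simplified, self-contained illustration (a single tilted triangle, uniform spring stiffness, and no fixed 10-5 distance constraints), not the production MATLAB code.

```python
# Simplified sketch of the iterative flattening: (1) a gravity step of
# at most 0.1 mm toward Z=0, (2) spring forces restoring the original
# 3D edge lengths, repeated (3) until all Z coordinates are within
# tolerance. Illustration only; the fixed 10-5 constraints are omitted.
import numpy as np

def flatten(verts, edges, step=0.1, k=0.5, tol=0.1, max_iter=20000):
    v = np.asarray(verts, dtype=float).copy()
    rest = {e: np.linalg.norm(v[e[0]] - v[e[1]]) for e in edges}
    for _ in range(max_iter):
        # (1) gravity: move every vertex toward the X-Y plane
        v[:, 2] -= np.clip(v[:, 2], -step, step)
        # (2) springs: nudge edge endpoints toward their 3D rest length
        for (i, j), r in rest.items():
            d = v[j] - v[i]
            length = np.linalg.norm(d)
            if length > 1e-12:
                corr = k * (length - r) * d / length / 2.0
                v[i] += corr
                v[j] -= corr
        # (3) stop once the panel is flat to within tol
        if np.max(np.abs(v[:, 2])) < tol:
            break
    return v

# A tilted triangle flattens to Z≈0 while roughly keeping edge lengths.
flat = flatten([(0, 0, 0), (30, 0, 5), (15, 25, 5)],
               [(0, 1), (1, 2), (0, 2)])
```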

Fig. 6

Pseudocode for spring-relaxation approach to obtain flattened 2D panels.


2.4. Panel STL Generation and Cap Assembly

To generate and finalize printable cap panels from the flattened 2D projections, several additional steps are necessary (Fig. 7): (IV) complete the panel outlines; (V) fill the panels with a hexagonal lattice that grommets can be attached to and that provides structure and uniform stretch; (VI) create panel seams and “welding tabs” for assembly after printing; and (VII) extrude the 2D panels into printable 3D meshes, adding ear slits, grommets, and chin/neck strap holders according to the provided coordinates and grommet types. Steps IV to VI are part of our MATLAB script; for step VII, our script makes use of the advanced 3D meshing and manipulation functionality of Blender 3D via its internal Python interface.

Fig. 7

(IV) Defining virtual panel outlines and matching a template for the lower outline to side panels. (V) Filling the panel outlines with a hexagonal lattice. (VI) Generating welding tabs along virtual seams, considering normal and special cases with grommets close to the seam. (VII) 2D to 3D extrusion and assembly of panels in Blender3D: adding grommet types, ear slit, and chin (and neck) strap holders.


2.4.1. Panel outlines

After flattening the meshes, we use the contours of the cut lines to generate the panel outlines. This maintains the absolute coordinates of the projected grommets that will be placed later in the process. For the middle panel, the outline is defined by the sagittal and axial cutting lines after flattening. We cut the middle panel in half to enable printing on a print bed with a 30-cm edge length. For the side panels, the sagittal cutting line after flattening defines the upper outline, but the lower part needs to be added: it defines the cap outline underneath the chin, around the ear, and above the neck. We use a template that we have optimized over the years; it is laterally stretched to connect to the outermost points of the upper outline. See Fig. 7(IV).

2.4.2. Hexagonal lattice structure

The outlines are then automatically filled with a hexagonal lattice with an edge length of 10 mm [see Fig. 7(V)]. We previously explored different geometries, such as a square lattice or zig-zag segments, but these proved less suitable for achieving uniform stretch/deformation of the cap on the head. The 10-mm edge length of the hexagonal lattice is a trade-off between good deformation (smaller hexagons reduce the stretchiness of the cap, leading to less comfort and/or a worse fit) and the minimum area coverage needed to ensure that grommets are always well connected to the lattice and do not end up in empty spaces, as the diameters of most grommets across sensor types and manufacturers are above 10 mm.
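A minimal sketch of generating such a lattice is shown below; it returns hexagon centers of a pointy-top tiling over a rectangle, whereas the pipeline clips the lattice to the actual panel outlines.

```python
# Sketch of filling a region with a hexagonal lattice of edge length a
# (10 mm in the caps). Returns hexagon centers of a pointy-top tiling
# over a rectangle; the pipeline clips the lattice to panel outlines.
from math import sqrt

def hex_centers(width, height, a=10.0):
    """Centers of a pointy-top hexagonal tiling covering width x height."""
    dx = sqrt(3.0) * a      # horizontal spacing between hexagon centers
    dy = 1.5 * a            # vertical spacing between rows
    centers, row, y = [], 0, 0.0
    while y <= height:
        x = dx / 2.0 if row % 2 else 0.0   # odd rows are offset
        while x <= width:
            centers.append((x, y))
            x += dx
        y += dy
        row += 1
    return centers
```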

2.4.3. Panel seams

To form the final cap, the printed panels are assembled via ultrasonic welding. For this, each panel has “welding tabs” that must be aligned across the joint seam of a panel and its corresponding neighbor. Welding tabs are generated with a thicker outline that facilitates a better connection during ultrasonic welding. We generate them as follows: using the sagittal cutting line as a virtual seam between both panels, we identify all edges within a fixed physical distance directed toward the seam. For each edge, we find its closest corresponding edge from the other panel and reposition both toward their shared intermediate point. Each edge of the pair is then extended to the end node of the other. Together, they constitute the overlapping “welding tabs” for manual assembly (see Sec. 2.4.4). Two special cases need to be considered. (1) If one panel has more intersecting edges toward the seam than the other, we redirect and join two “welding tabs” from one panel to a single corresponding one from the other. (2) Grommets can be close to or overlapping with virtual seams; if a grommet’s location is within one hexagonal edge length of the virtual seam, we extend the overlapping edge to the next node on one panel side. Please see Fig. 7(VI) for visualizations.
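The edge-pairing step can be sketched in 2D as follows; representing tabs by their seam-facing endpoints is a simplification of the actual mesh edges.

```python
# 2D sketch of the welding-tab pairing: each seam-facing edge endpoint
# on one panel is matched to the closest endpoint on the neighboring
# panel, and both are repositioned to their shared intermediate point.
# Simplified illustration; the pipeline operates on mesh edges.
def pair_tabs(ends_a, ends_b):
    """Pair endpoints across the seam and return the shared midpoints."""
    shared = []
    for xa, ya in ends_a:
        xb, yb = min(ends_b, key=lambda p: (p[0] - xa) ** 2 + (p[1] - ya) ** 2)
        shared.append(((xa + xb) / 2.0, (ya + yb) / 2.0))
    return shared
```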

2.4.4. 2D to 3D extrusion and assembly of ear slits, grommets, and holders

The resulting two side panels and two halves of the top panel, all with their generated outlines, hexagonal lattices, and welding tabs, are saved as 2D .STL files. Together with the corresponding grommet types, 2D coordinates, and rotations in a .digpts text file, these are processed by a Blender 3D Python script to assemble the final panels for printing [see Fig. 7(VII)]. We use Blender’s own Python with its native functions for mesh manipulation (e.g., Boolean joining and cutting out of objects) and some conventional modules (e.g., SciPy) installed. The 2D panels are first extruded by 0.9 mm to create printable 3D volumes. For each listed grommet type, its corresponding STL file is loaded from a library folder and placed and merged with the panel’s hex lattice at its corresponding 2D coordinate. Using the identifier #<abcde> from the probe .SD design file as a naming convention (STLs are always named “grommet.stl” and saved in a folder with the same name as the identifier) enables easy adaptation and placement of fully customized 3D holders and objects on the panel. Standard chin and neck-strap holders are placed at pre-defined positions on the side panels. Finally, ear slit holes are stamped out of the side panels, and corresponding outline templates are placed. The ear slit outlines are centered 15 mm up from the LPA and RPA 10-20 reference points. These relative positions are the result of ongoing validation and user feedback to optimize fit and comfort. The whole process generates four cap panels: the left- and right-side panels and one mid panel that is cut in half to fit on print beds with a 30-cm edge length.
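The folder-per-identifier lookup described above can be sketched as follows; the library root ("grommets") is a placeholder of our own, not the actual path used by the pipeline.

```python
# Sketch of the grommet library convention described above: each STL is
# named "grommet.stl" and saved in a folder named after the identifier.
# The library root ("grommets") is a placeholder, not the actual path.
from pathlib import Path

def grommet_stl_path(identifier, library=Path("grommets")):
    """Resolve the STL file for a grommet identifier such as '#NIRX2'."""
    return library / identifier / "grommet.stl"
```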

2.5. Physical Cap Creation

The four resulting cap panel STL files are now ready to be printed with any 3D printer with a print bed >30×30 cm² that supports printing with flexible thermoplastic polyurethane (TPU) filaments such as ninjaFlex. Printing thin features in this filament at the quality needed for our head caps depends heavily on a few settings: primarily the extruder temperature, the nozzle offset, and the line-to-line overlap. The extrusion temperature must be tailored to the thermal transfer properties of the nozzle on the printer. The highest-quality combination we found is a hardened steel nozzle at 225°C and a print speed of 10 to 12 mm/s. Nozzles made of plated brass or copper require the temperature to be 10 to 15 deg lower or the print speed to be increased by a proportional amount. If the extrusion temperature is set too high, the final part will have voids in it because filament leaks out of the nozzle as it travels, weakening the part. This can be identified by strings of TPU crossing areas of the part that should not have any material. The nozzle offset needs to be dialed in for each printer and should be set close enough that the deposited line is compressed into the build plate, so the resulting line is about 1.5 to 2 times as wide as it is tall. This setting, combined with a line-to-line overlap of 20%, ensures that the individual lines on each layer merge enough that the top and bottom layers no longer show distinct paths of travel on a finished print. If the overlap is insufficient, the finished cap will have weak layer adhesion and wear out prematurely.

The printed panels are then manually assembled into a cap. This is done using cost-efficient, low-power handheld ultrasonic (US) welding devices such as the one depicted in Fig. 8(IX). We use a QUPPA clamp device, a 20 W, 57 kHz countertop ultrasonic sealer for making welding points on PET, PS, PVC, and PE materials. To connect panels along the seam, the corresponding welding tab pair of each panel is placed on top of each other and then welded by gently pressing down with the US welding clamp. The welding time in seconds (set on the US welding device) and the manual pressure at the welding clamp determine the quality and durability of the welding points along the seam. If the pressure and/or welding time are too low, the TPU does not connect properly; if they are too high, the welding tabs can be destroyed. With the right settings, the resulting connections are comparable in stretch and durability to any other part of the 3D-printed TPU mesh. After initial training, assembling a cap takes 0.5 to 1 h. As the cap already has all grommets printed at the desired locations, it is immediately ready to be populated with probes and used in experiments.

Fig. 8

(VIII) 3D printing of four cap panels. (IX) Manual assembly using ultrasonic welding clamp. (X) Finished ninjaCaps.


2.6. Verification and Validation

The presented methodology for creating ninjaCaps has been refined over the last five years through an ongoing process of development, verification, and validation with several hundred participants. This process evaluated the accuracy of probe placement, along with the comfort, fit, and longevity of the caps (see Sec. 3). Due to the iterative nature of the development process, we focus on quantifying performance metrics for the final design presented in this article. Throughout the iterations, we used quantitative and qualitative metrics for human labor and fit to the head: (1) cap stiffness, as visually assessed (too stiff prevents a good fit and conformity to the head, whereas too flexible increases motion artifacts and reduces sensor placement precision); (2) cap comfort, as reported by users; (3) durability (number of uses before breaking) after assembly; and (4) difficulty of fabrication, based on our team’s feedback on the time and training needed for printing and assembly, leading to an effort to reduce complexity.

To quantify the precision of our final technique—translating 3D target coordinates on a head model to flat, printable cap panels and subsequently to physical probes at the target coordinates on the actual head—we employ the following procedure. We use two frequently used head models with 56 cm head circumference and known EEG 10-20 coordinates as our ground truth: the Colin 27 atlas (an accurate single-subject scan) and the ICBM 152 atlas (an aligned average of 152 subjects). We (1) fabricate ninjaCaps with grommets positioned at these coordinates and (2) create and print a version of each head model that features small indentations denoting the correct positions. With the cap placed on each head, we then measure and report the distances between the center of each grommet on the cap and the corresponding indentation on the model across the 10-20 positions.
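The error summary reported in Sec. 3 amounts to per-position Euclidean distances reduced to a mean and standard deviation, as sketched below; the coordinate values are made up for illustration.

```python
# Sketch of the placement-error summary: Euclidean distances between
# grommet centers and ground-truth indentations, reported as mean and
# standard deviation. The coordinates below are made up for illustration.
from math import dist, sqrt

def placement_error(cap_pts, truth_pts):
    """Mean and (population) standard deviation of per-position errors (mm)."""
    errs = [dist(c, t) for c, t in zip(cap_pts, truth_pts)]
    m = sum(errs) / len(errs)
    sd = sqrt(sum((e - m) ** 2 for e in errs) / len(errs))
    return m, sd

cap   = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
truth = [(2.0, 0.0, 0.0), (10.0, 4.0, 0.0)]
m, sd = placement_error(cap, truth)   # per-position errors: 2 mm and 4 mm
```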

3. Results

3.1. ninjaCap

Over time, starting in 2018, we have evaluated 20 cap models in the first generation (template-based panel generation) and, since August 2022, 50 models in the second generation, which follows the spring-relaxation-based approach presented in this article. Figure 9(b) shows three such cap variants for fNIRS and HD-DOT. Mixed EEG-fNIRS caps and EEG-only caps have also been constructed. Because of the limited size of an average 3D printer’s print bed, the side panels and the split mid panel need to be printed sequentially in three stages. Each side panel print takes between 1 h 45 min and 2 h 30 min (depending on the cap size), and both mid-panel pieces combined take between 1 h 15 min and 2 h. Currently, the overall cost of creating a cap is 5 to 7 h of total printing time, approximately 30 cm³ of ninjaFlex (ca. $6.00), and 1.5 to 2 h of manual labor to print, clean, and assemble the panels. The current versions of our caps have been used for an average of over 50 cycles before wearing out or breaking.

Fig. 9

(a) Different head models. (b) Examples of three fNIRS probe types and ninjaCaps. First row with improved ear slit design.


3.2. Verification and Validation

3.2.1. Probe placement precision

When evaluated on the two 3D-printed, 56 cm head circumference ground-truth head models (Colin 27 and ICBM 152) with 10-20 marker indents, our ninjaCap approach yields an average probe placement error (mean ± standard deviation) of 2.7 ± 1.8 mm across both models (Colin 27: 2.2 ± 1.6 mm; ICBM 152: 3.1 ± 2.0 mm). Figure 10 shows the corresponding violin plots. Across both models, the error is evenly distributed across the whole head surface, but for Colin 27, it is lower toward the medial and anterior regions around Fz and Cz (1.7 ± 1.3 mm), constituting most points at the lower end of the distribution, and larger toward the lateral and occipital regions around F7/8, P7/8, and O1/2 (2.7 ± 1.7 mm), constituting most points at the upper end of the distribution.

Fig. 10

Violin plot of ninjaCap probe placement errors on 3D-printed ground truth head models (Colin 27, left, and ICBM 152, right) with EEG 10-20 positions. Blue dots: individual placement errors. Blue line/white dot: median/mean error. Gray vertical bar: interquartile range.


3.2.2.

User testing—fit and comfort

To date, more than 50 ninjaCaps have been tested by more than 10 independent research groups with 500+ participants. At the early stages of the cap design, some users reported discomfort caused by the caps’ ear slits, particularly during longer studies. We incorporated this feedback by testing different shapes, positions, and angles with our user base, resulting in the widened, backward-slanted ear slit with the position and angle reported in this article. Figure 9(b) shows both the old and the latest ear slit solutions (first row, prefrontal probe: latest ear slit; second and third rows, bilateral and whole-head probes: older ear slit). Additional user feedback led to an optional strap, similar to a chin strap but running to the back of the head, that can improve fit and produce a more uniform stretch on the head. The quality and amount of feedback received since then indicate comfort and performance comparable to the solutions our users are accustomed to (such as widely used EEG textile caps, many of which are also used for fNIRS studies, e.g., the EASYCAP from Easycap GmbH, Germany).

3.2.3.

Durability and mechanical lessons learned

To improve the durability of the caps, we iterated on the thickness (number of layers) of the lattice of the printed panels. Prints that were too thin (lattice thickness 0.7 mm) broke too easily and stretched too much after a few (10) uses; prints that were too thick did not conform well to the head. The best trade-off was a thickness of 0.8 mm, which has been used ever since. Similarly, to balance durability and stretchiness, we settled on lattice (hexagon edge) lengths of 10 mm and lattice widths of 0.85 mm. For robust connection of the panels with the ultrasonic welder, a welding tab width of 2 mm proved optimal. Another crucial set of parameters is the 3D printing profile, which depends on the material and printer used; we provide the profile for LulzBot TAZ Workhorse printers in the forum on openfnirs.org on request. Based on our and our user base’s experience with caps manufactured with these parameters, the usage lifetime of ninjaCaps is estimated to be comparable to that of conventional textile caps such as EasyCaps.
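The validated panel parameters above can be collected in a small configuration object; this is an illustrative sketch, not code from the ninjaCap pipeline, and the field names and the bounds in the sanity check are our own assumptions around the reported values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PanelPrintParams:
    """Panel parameters reported in the text (all lengths in mm).

    Field names are illustrative; defaults are the values the
    article reports as the best trade-off.
    """
    lattice_thickness: float = 0.8    # panel lattice thickness
    hexagon_edge_length: float = 10.0  # lattice (hexagon edge) length
    lattice_width: float = 0.85        # strut width of the lattice
    welding_tab_width: float = 2.0     # tab width for ultrasonic welding

    def check(self):
        # Thinner lattices broke/stretched, thicker ones did not conform
        # to the head (see text); the upper bound here is an assumption.
        assert 0.8 <= self.lattice_thickness <= 1.0, "lattice thickness out of validated range"

params = PanelPrintParams()
params.check()
print(params)
```

Keeping such parameters in one frozen object makes it easy to version the settings that a given batch of caps was printed with.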

3.3.

Availability

To make ninjaCaps available to the community, we host an automated cap-generation pipeline at bfnirs.openfnirs.org. The pipeline implements the process described above and requires a .SD probe specification file; the generated printable cap panels can then be downloaded as .STL files. Probes can be designed with AtlasViewer, which is openly available as a MATLAB repository at github.com/bunpc/atlasviewer together with comprehensive tutorials. Several ninjaCap variants for fNIRS and HD-DOT that we have generated over the last years are also openly available for download at openfnirs.org/hardware/ninjacap.

4.

Discussion and Conclusion

4.1.

Alternative Approaches

On our path to the current ninjaCap generation, we considered alternative approaches to generating custom fNIRS/DOT headgear and caps. A brief summary of alternatives follows to contextualize and motivate the solution that we presented in this article.

4.1.1.

Combining 3D-printed probe holders with textile caps or custom headgear

Custom holders or modules can be 3D-printed and then manually incorporated into textile caps14,15 or can be used to constitute mechanical elements of headgear.11,16,17 However, this approach is labor intensive and not readily scalable.

4.1.2.

Direct 3D modeling and printing

Caps can be directly designed in a 3D environment, taking into account the curvature and specific anatomical features, and then 3D printed in a rigid or even flexible material.18 As in our approach, individual surface anatomies, e.g., from a structural MRI or a photogrammetric scan, can be used to design subject-specific caps or helmets, in this case without the need to project onto a 2D plane. This has the potential to provide even higher precision, especially for rigid structures and helmets that can support heavy probes. Printing flexible 3D caps, however, requires special 3D printers because extruded TPU/ninjaFlex has anisotropic mechanical properties: the strength in the “z” dimension between layers is not as high as within a layer, especially along the direction of the strands, and compromises in the flexibility/stretchability of the material must often be made.

4.1.3.

Template-based panel generation

Instead of projecting the 3D surface to 2D, one can use pre-defined templates of head shapes and sizes to generate 2D panels with a set of positions (e.g., 10-5 landmarks) calibrated to the 3D surface. Although this is easy to implement, a drawback is the reduced accuracy of probe positions, especially when individual head geometries are used. Our ninjaCaps started with this template-based panel generation, and we progressed to the spring-relaxation algorithm presented in this article to improve precision.

4.1.4.

Conformal mapping

Projecting the 3D head surface onto a 2D plane with minimal distortion can be accomplished with conformal mapping, a technique known from cartography. Conformal mapping is excellent at locally preserving angles and shapes at small scales but does not necessarily preserve overall lengths or areas. Although it is a powerful mathematical tool, it is not the most suitable approach for this application, in which preserving exact lengths (channel distances) is the priority.

4.2.

Limitations and Further Improvements

The verification of probe placement precision using 3D-printed ground truth head models showed that placement errors are small but can be further reduced, especially for the lateral/dorsal regions of the head. There are two main sources of error in these regions. (1) Using a chinstrap with any cap, be it textile or TPU-based, leads to a localized non-uniform stretch that affects these regions most strongly. This error could be reduced by factoring the stretch into the probe placement before printing; however, that would increase the error whenever a chinstrap is not used. (2) We also noticed that our spring-relaxation-based panel flattening can lead to deformations that contribute to greater probe placement error toward the lower panel seams. We are currently improving the flattening approach to further reduce these effects and will continue to evaluate performance across different head circumferences.

Head models that better fit many users’ head shapes can also improve performance. By default, we currently generate caps based on the widely used Colin 27 head model, the default in our AtlasViewer software. This head model is not equally suitable for all subjects, as it is based on a single individual’s head anatomy. Head shapes differ across individuals and ethnicities, most visibly along the medial axis and particularly toward the back of the head. If no individual head model is available, a more representative, readily available head model can also improve cap performance: one possible alternative to the Colin head model is the ICBM 152 atlas, which averages 152 subjects. As our cap generation and spring-relaxation-based panel flattening are agnostic to the head model used, it is currently up to the user to choose a model that optimizes the cap fit for a particular target population. We view this limitation as comparable to the need to purchase commercial fabric caps with a cut matching the population to be studied (e.g., EasyCap’s Asian versus Caucasian cuts).

Finally, it should be noted that, although the typical number of usage cycles of ninjaCaps appears to be in the range of commercial textile caps, the cleaning process differs. Most textile caps are also specified for machine washing at >40°C, as well as for the application of alcohol and other disinfectants. We clean our ninjaCaps with water and alcohol/disinfectants, but machine washing is not an option for the TPU-based material.

4.3.

Outlook

We are continuing to expand and improve our approach beyond the current ninjaCap generation. One ongoing project, requested by users, is an expandable cap that can be size-adjusted along the sagittal midline (one size fits all). This updated design would also reduce the total printing time to under 6 h and make the same cap adaptable to different subjects. Adjusting the cap size is particularly useful in high-density diffuse optical tomography experiments, in which whole-head designs (such as the one depicted in Fig. 8) contain up to several hundred optodes and swapping optodes across caps with different head sizes can be time-consuming. At the other end of the spectrum, to further facilitate the creation of subject-specific caps, photogrammetry can be incorporated to quickly create individual head scans that can then be used in the ninjaCap pipeline.

Introducing the 3D printing of flexible head caps to the field of brain imaging offers exciting possibilities for customized and tailored data collection. The approach enables head caps that closely match the individual’s head shape, optimize scalp contact, and ensure precise optode/electrode placement, thereby enhancing the signal quality, accuracy, and reliability of data acquisition while improving wearing comfort. It also streamlines the cap preparation process, improving the efficiency and consistency of data collection both in group studies across diverse populations and in personalized brain imaging, which reduces variance and facilitates reproducibility in neuroscience research. With 3D printing technology, researchers can not only design and produce head caps with precise electrode/sensor placement but also integrate multiple sensor types in the same cap for novel multimodal brain imaging approaches.

By sharing this tutorial and online resources for the ninjaCaps, we hope to aid the community in adopting these promising advances for their neuroscience research. We encourage interested readers and users to use the forum on openfnirs.org to ask for any details that they might find useful or missing.

5.

Appendix: Supplementary Information

Supplementary videos visualize the iterative spring-based flattening algorithm used to project 3D cap panel meshes to flat (printable) 2D space. Black dots depict vertices in the mesh. Red and blue dots depict fNIRS sources and detectors, respectively, which are also included as vertices in the mesh. All edges between vertices are treated as springs during the flattening process. If a spring deformation yields an edge length more than 3 mm greater/smaller than the target edge length on the 3D head, it is shown in red/cyan. Spring connections between optodes, according to the AtlasViewer probe design, are shown in green. Green edges without sources/detectors belong to “dummy optodes” that are placed to provide additional visual markers on the printed cap.
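The spring-based flattening described above can be sketched in miniature: 2D vertex positions are iteratively nudged so that each edge approaches its target length from the 3D head surface, and edges deviating by more than 3 mm from their target are flagged (shown in red/cyan in the videos). The following is an illustrative re-implementation of the idea under a simple explicit Hooke's-law update, not the pipeline's actual solver; all names and the toy geometry are ours.

```python
import numpy as np

def relax_springs(pos2d, edges, rest_lengths, k=0.1, iters=2000):
    """Relax 2D vertex positions so that edge lengths approach their
    target (rest) lengths measured on the 3D head surface.

    pos2d: (N, 2) initial 2D positions; edges: list of (i, j) vertex
    index pairs; rest_lengths: target length per edge (mm).
    """
    pos = np.array(pos2d, dtype=float)
    for _ in range(iters):
        forces = np.zeros_like(pos)
        for (i, j), L0 in zip(edges, rest_lengths):
            vec = pos[j] - pos[i]
            L = np.linalg.norm(vec)
            if L == 0.0:
                continue  # degenerate edge, no defined direction
            f = k * (L - L0) * vec / L  # Hooke's law along the edge
            forces[i] += f   # stretched edge pulls i toward j ...
            forces[j] -= f   # ... and j toward i; compressed pushes apart
        pos += forces
    return pos

def strained_edges(pos, edges, rest_lengths, tol=3.0):
    """Edges whose 2D length deviates from target by more than tol mm
    (the 3 mm criterion highlighted in the supplementary videos)."""
    return [(i, j) for (i, j), L0 in zip(edges, rest_lengths)
            if abs(np.linalg.norm(pos[j] - pos[i]) - L0) > tol]

# Toy triangle: start compressed, relax toward 30 mm target edges.
edges = [(0, 1), (1, 2), (2, 0)]
rest = [30.0, 30.0, 30.0]
flat = relax_springs([[0, 0], [10, 0], [5, 8]], edges, rest)
print(strained_edges(flat, edges, rest))  # prints [] after convergence
```

In the real pipeline, every mesh edge of a panel (including optode and dummy-optode vertices) acts as such a spring, so the flattened panel preserves the 3D inter-probe distances to within the tolerance.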

Video 1 Flattening example of a cap’s top panel (MP4, 1.39 MB [URL: https://doi.org/10.1117/1.NPh.11.3.036601.s1]).

Video 2 Flattening example of a cap’s left side panel (MP4, 1.26 MB [URL: https://doi.org/10.1117/1.NPh.11.3.036601.s2]).

Disclosures

A.v.L. is currently consulting for NIRx Medical Technologies LLC/GmbH. All other authors have no conflicts of interest to declare.

Acknowledgments

The development of ninjaCap has been funded in part under the US National Institutes of Health (NIH Grant No. U01EB0239856). A.v.L. gratefully acknowledges funding from the German Federal Ministry of Education and Research (Grant No. BIFOLD24B). We acknowledge support by the Open Access Publication Fund of TU Berlin.

References

1. A. M. Dale, B. Fischl, and M. I. Sereno, “Cortical surface-based analysis,” NeuroImage 9, 179–194 (1999). https://doi.org/10.1006/nimg.1998.0395

2. F. Scholkmann et al., “A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology,” NeuroImage 85, 6–27 (2014). https://doi.org/10.1016/j.neuroimage.2013.05.004

3. M. A. Yücel et al., “Best practices for fNIRS publications,” Neurophotonics 8, 012101 (2021). https://doi.org/10.1117/1.NPh.8.1.012101

4. T. M. Tierney et al., “Optically pumped magnetometers: from quantum origins to multi-channel magnetoencephalography,” NeuroImage 199, 598–608 (2019). https://doi.org/10.1016/j.neuroimage.2019.05.063

5. H. H. Jasper, “Report of the committee on methods of clinical examination in electroencephalography,” Electroencephalogr. Clin. Neurophysiol. 10, 370–375 (1958). https://doi.org/10.1016/0013-4694(58)90053-1

6. G. A. Zimeo Morais, J. B. Balardin, and J. R. Sato, “fNIRS Optodes’ Location Decider (fOLD): a toolbox for probe arrangement guided by brain regions-of-interest,” Sci. Rep. 8, 3341 (2018). https://doi.org/10.1038/s41598-018-21716-z

7. R. Ball et al., “A comparison between Chinese and Caucasian head shapes,” Appl. Ergonomics 41, 832–839 (2010). https://doi.org/10.1016/j.apergo.2010.02.002

8. V. Fonov et al., “Unbiased average age-appropriate atlases for pediatric studies,” NeuroImage 54, 313–327 (2011). https://doi.org/10.1016/j.neuroimage.2010.07.033

9. I. Mazzonetto et al., “Smartphone-based photogrammetry provides improved localization and registration of scalp-mounted neuroimaging sensors,” Sci. Rep. 12, 10862 (2022). https://doi.org/10.1038/s41598-022-14458-6

10. S. Homölle and R. Oostenveld, “Using a structured-light 3D scanner to improve EEG source modeling with more accurate electrode positions,” J. Neurosci. Methods 326, 108378 (2019). https://doi.org/10.1016/j.jneumeth.2019.108378

11. A. von Lühmann et al., “Headgear for mobile neurotechnology: looking into alternatives for EEG and NIRS probes,” in Proc. 7th Graz Brain-Comput. Interface Conf., 496–501 (2017). https://doi.org/10.3217/978-3-85125-533-1

12. C. M. Aasted et al., “Anatomical guidance for functional near-infrared spectroscopy: AtlasViewer tutorial,” Neurophotonics 2, 020801 (2015). https://doi.org/10.1117/1.NPh.2.2.020801

13. C. J. Holmes et al., “Enhancement of MR images using registration for signal averaging,” J. Comput. Assist. Tomogr. 22, 324–333 (1998). https://doi.org/10.1097/00004728-199803000-00032

14. E. E. Vidal-Rosas et al., “Wearable, high-density fNIRS and diffuse optical tomography technologies: a perspective,” Neurophotonics 10, 023513 (2023). https://doi.org/10.1117/1.NPh.10.2.023513

15. M. A. Yaqub, S.-W. Woo, and K.-S. Hong, “Compact, portable, high-density functional near-infrared spectroscopy system for brain imaging,” IEEE Access 8, 128224–128238 (2020). https://doi.org/10.1109/ACCESS.2020.3008748

16. P. Giacometti and S. G. Diamond, “Compliant head probe for positioning electroencephalography electrodes and near-infrared spectroscopy optodes,” J. Biomed. Opt. 18, 027005 (2013). https://doi.org/10.1117/1.JBO.18.2.027005

17. D. Anaya et al., “Scalable, modular continuous wave functional near-infrared spectroscopy system (Spotlight),” J. Biomed. Opt. 28, 065003 (2023). https://doi.org/10.1117/1.JBO.28.6.065003

18. E. Xu et al., “Flexible-circuit-based 3-D aware modular optical brain imaging system for high-density measurements in natural settings,” (2024).

Biographies of the authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Alexander von Lühmann, Sreekanth Kura, Walker Joseph O'Brien, Bernhard B. Zimmermann, Sudan Duwadi, De’Ja Rogers, Jessica E. Anderson, Parya Farzam, Cameron Snow, Anderson Chen, Meryem A. Yücel, Nathan Perkins, and David A. Boas "ninjaCap: a fully customizable and 3D printable headgear for functional near-infrared spectroscopy and electroencephalography brain imaging," Neurophotonics 11(3), 036601 (27 August 2024). https://doi.org/10.1117/1.NPh.11.3.036601
Received: 20 March 2024; Accepted: 24 July 2024; Published: 27 August 2024
KEYWORDS: Head, 3D printing, Printing, 3D modeling, Electroencephalography, Design, Sensors
