Organ segmentation, or annotation, is an essential step for a variety of radiologic purposes such as automated organ detection, automated lesion detection, and radiotherapy. Convolutional Neural Networks (CNNs) are a class of neural network that requires large amounts of training data for sensitive and specific image analysis. Annotating reference-standard training data for medical images is costly and time-consuming for clinically experienced professionals. Here, we evaluate the feasibility of crowdsourcing from untrained workers as a viable modality for large-scale data annotation. This pilot study evaluates the accuracy and practical viability of crowdsourced kidney segmentations. Forty-two CT scans were labeled by 72 users on the Robovision AI platform, and their submissions were averaged. Primary validation compared the crowd's submissions to reference segmentations. Crowdsourced segmentations and expert-labeled segmentations were then used, individually and together, as training data for separate CNN models. We found that the performance of the model trained on crowdsourced data (Dice score = 0.904 ± 0.026) was not significantly different (P = 0.50) from that of the expert-labeled model (Dice score = 0.885 ± 0.112). When trained on the combined set, the CNN achieved a comparable result (Dice score = 0.932 ± 0.040). These data suggest that untrained workers can serve as a cost-effective alternative to experts for radiologic kidney segmentation, presenting a new modality for scalable medical imaging data generation.
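The abstract's evaluation hinges on two operations: scoring a segmentation against a reference with the Dice coefficient, and combining ("averaging") multiple crowd submissions into one mask. A minimal sketch of both, assuming binary voxel masks and a simple per-voxel majority vote (the exact averaging scheme is not specified in the abstract, so this is an illustrative assumption, not the authors' method):

```python
# Hypothetical sketch, not the study's actual pipeline.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def majority_vote(masks: list) -> np.ndarray:
    """Combine several binary crowd masks by per-voxel majority.
    Assumes all masks share the same shape."""
    stacked = np.stack([m.astype(np.uint8) for m in masks])
    return (stacked.mean(axis=0) >= 0.5).astype(np.uint8)
```

For example, two masks with three positive voxels each and two voxels of overlap give a Dice score of 2·2 / (3 + 3) ≈ 0.667; a score of 1.0 indicates identical masks.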