Deep learning has enabled remarkable improvements in grasp synthesis for previously unseen objects observed from partial views. However, existing approaches lack the ability to reason explicitly about the full 3D geometry of the object when selecting a grasp, relying instead on indirect geometric reasoning learned as a by-product of training grasp success networks. This abandons common-sense geometric reasoning, such as avoiding undesired robot-object collisions. We propose using learned 3D reconstruction to enable explicit geometric awareness in a grasping system. Our reconstruction network directly predicts the signed distance to the object surface for query points near the object. We leverage the structure of the reconstruction network to learn a grasp success classifier, which serves as the objective function for our grasp planner. We combine these components into a constrained, continuous grasp optimization problem that plans grasps while avoiding undesired collisions. Our results show that explicitly learning to reconstruct the 3D geometry outperforms alternative formulations for handling partial-view information, as evaluated by real-robot grasp execution.
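
For concreteness, a minimal sketch of the resulting planning problem, under assumed notation (a grasp parameterized by a configuration $\theta$, a latent embedding $z$ of the observed object, a learned success classifier $f_{\text{succ}}$, the predicted signed distance function $s(\cdot)$, and illustrative collision-check points $x_i(\theta)$ on the robot hand; these symbols are not the paper's own):
\[
\begin{aligned}
\max_{\theta} \quad & f_{\text{succ}}(\theta, z) \\
\text{s.t.} \quad & s\big(x_i(\theta)\big) \ge \epsilon, \qquad i = 1, \dots, N,
\end{aligned}
\]
where $\epsilon \ge 0$ is a safety margin: with the convention that $s$ is negative inside the object and positive outside, the constraints keep the hand points from penetrating the predicted object surface while the objective seeks the most promising grasp.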