Labeling every point in a large-scale point cloud is highly tedious and time-consuming due to the vast number of points and their irregular structure. To address this issue, we propose a novel point-based transformer network for weakly-supervised semantic segmentation that requires annotations for only 0.1% of points. Our network introduces generalized local features that represent global factors from different neighborhoods according to their ordered positions. We then share query-point weights with local features through point attention to reinforce their influence, which is essential for determining sparse point labels. A geometric encoding is introduced to balance the impact of query points and preserve point-position information during training. As a result, a point in a specific local region can obtain global features from corresponding points in other neighborhoods and be reinforced by its query points. Experimental results on benchmark large-scale point clouds demonstrate that our proposed network achieves state-of-the-art performance.
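The abstract does not specify the attention formulation. As a rough illustration only, a minimal vector-attention sketch in which a geometric encoding derived from relative point positions modulates both the attention scores and the aggregated features (all names, shapes, and the linear position map are hypothetical assumptions, in the general spirit of point-transformer attention, not the authors' exact method) might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def point_attention(q_feat, nbr_feat, q_pos, nbr_pos, w_delta):
    """Toy point attention for one query point (hypothetical sketch).

    q_feat:   (d,)   feature of the query point
    nbr_feat: (k, d) features of its k neighbors
    q_pos:    (3,)   xyz of the query point
    nbr_pos:  (k, 3) xyz of the neighbors
    w_delta:  (3, d) linear map producing the geometric encoding
    """
    # Geometric encoding from relative positions (assumed linear here).
    delta = (nbr_pos - q_pos) @ w_delta              # (k, d)
    # Subtraction-style relation between query and neighbors,
    # shifted by the geometric encoding, reduced to a scalar score.
    scores = (q_feat - nbr_feat + delta).sum(axis=-1)  # (k,)
    attn = softmax(scores)                           # (k,)
    # Aggregate neighbor features, again offset by the encoding.
    return (attn[:, None] * (nbr_feat + delta)).sum(axis=0)  # (d,)
```

Under this sketch, the geometric encoding `delta` reappears in both the score and the value path, which is one common way to keep positional information available throughout training.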