good-paper-sentences-collection

A collection of good sentences from papers

  1. Our evaluation shows that our approach obtains better results than task-specific handcrafted representations across different tasks and programming languages (our evaluation shows that ...)
  2. Leveraging machine learning models for predicting program properties
  3. We present a novel program representation (we present a novel ...)
  4. In this paper, we demonstrate the power and generality of AST paths on the following tasks: (we demonstrate ... on the following tasks)
  5. Empirical studies have shown that...
  6. Raychev et al. (... and others); i.e. (that is); e.g. (for example)
  7. Automatic generation may produce a prohibitively large number of paths. (produces a prohibitively large ...)
  8. In Sections 3.1 and 3.2 we present CRFs and word2vec (in Sections X and Y we present ...); in this section
  9. neural-network-based approaches have shown
  10. we base the following definitions on pairwise paths between AST terminals. (base ... on ...)
  11. if and only if...
  12. This paper makes the following contributions. (the contributions of this paper are as follows)
  13. Sections 6 and 7 are dedicated to the discussion of our results and conclusions. (Sections 6 and 7 focus on ...)
  14. In the next section we describe
  15. To summarize, our E.T.-RNN approach would possibly work better than (to summarize, ...)
  16. In the following, (in what follows)
  17. Khashman tests NN classifiers with different training to validation data ratios (tests different training-to-validation data ratios)
  18. By employing different kernel functions, the SVM technique can be applied to (by employing different kernel functions ...)
  19. The process of boosting continues until the loss function reduction becomes limited. (i.e., until the loss function converges)
  20. In accordance with the suggestion of Ala’raj and Abbod (2016b), (in accordance with the suggestion of ...; a useful alternative to always writing 'according to')
  21. Ranging from early matrix factorization to recently emerged deep learning based methods, (ranging from early ... to recently emerged ...)
  22. Distinct from HOP-Rec, we contribute a new technique to integrate high-order connectivities into the prediction model, (distinct from ...; an alternative to 'different from')
  23. This not only increases the model representation ability, but also boosts the performance for recommendation (boosts the performance; uses 'boost' rather than 'improve')
  24. Towards this end, we perform experiments over user groups of different sparsity levels. (towards this end, we perform experiments ...)
  25. For fair consideration, the latent dimensions of all compared baselines are set the same as in Table 2, (for the sake of fairness ...)
  26. The results demonstrate the significant superiority of RippleNet over strong baselines (demonstrate the significant superiority of ... over ...)
  27. Recently, many studies on extending deep learning approaches for graph data have emerged (recently, many studies on ... have emerged)
  28. Our paper makes notable contributions summarized as follows (our contributions are summarized as follows:)
  29. we refer the readers to [39] (we recommend/suggest that readers refer to ...)
  30. Attention mechanisms have become almost a de facto standard in many sequence-based tasks (have become almost a de facto standard ...)
  31. In general, the modeling process boils down to extracting local or global connectivity patterns between entities (in general, ... boils down to ...)
  32. we show marked performance gains in comparison to state-of-the-art methods on all datasets. (marked performance gains in comparison to the state of the art)
  33. To the best of our knowledge,
  34. aroused considerable research interest
  35. we find that COMPGCN outperforms all the existing methods in 4 out of 5 metrics on FB15k-237 and in 3 out of 5 metrics on the WN18RR dataset. (outperforms ... in 4 out of 5 metrics on ...)
  36. We defer this as future work
  37. a blowup in the number of parameters that need to be estimated. (a blowup in the number of ...)
  38. Another approach for graph embeddings is thus to leverage proven approaches for language embeddings. (leverage proven approaches ...)
  39. we also discuss quality metrics that provide ways to measure quantitative aspects of these dimensions. (measure quantitative aspects of ...)
  40. GNNs are notorious for their poor scalability.
  41. We speculate that
  42. In this setting, we compare the (in this setting)
  43. we leave these results out of our comparison table. (we leave ... out of our comparison)
  44. Three benchmark datasets (FB15k-237, WN18RR and FB15k-237-Attr) are utilized in this study. (... are utilized in this study)
  45. Our work is mainly related to two lines of research
  46. Empirically, our model yields considerable performance improvements over existing embedding models, (yields considerable performance improvements over ...)
  47. We empirically evaluate different choices of entity representations and relation representations under this framework on the canonical link prediction task (we evaluate ... on the canonical, i.e. standard, task)
  48. SEEK can achieve either state-of-the-art or highly competitive performance on a variety of benchmarks for KGE compared with existing methods. (achieves either state-of-the-art or highly competitive performance compared with existing methods)
  49. Numerous efforts have since continued to push the boundaries of recurrent language models (numerous efforts have continued to push the boundaries of ...)
  50. Our overarching interest is whether
  51. Our experimental study provides additional evidence for this finding. (provides additional evidence for an earlier finding)
  52. Similar remarks hold for RESCAL and DistMult as well as (albeit to a smaller extent) ConvE and TransE. (similar remarks hold for ... as well)
  53. RESCAL (Nickel et al., 2011), which constitutes one of the first KGE models (RESCAL is regarded as one of the first works on KGE)
  54. predicting the properties of molecules and materials using machine learning (and especially deep learning) is still in its infancy. (... is still in its infancy)
  55. most research applying machine learning to chemistry tasks has revolved around feature engineering. (most research has revolved around ...)
  56. empowering HGT to maintain dedicated representation for different types of nodes and edges. (maintain dedicated representations for different types of nodes and edges)
  57. Figure 1 depicts the macro-structure of Mixer. (Figure 1 depicts the overall structure)
  58. Vinyals et al. [32] and Ravi and Larochelle [24] apply Matching Networks using cosine distance. However, for both Prototypical Networks and Matching Networks any distance is permissible (for these models, any distance is permissible)
  59. For Prototypical Networks, we conjecture this is primarily due to cosine distance not being a Bregman divergence (we conjecture that a result is primarily due to ...)
  60. While suggestive as a research result, in terms of practical applications, the zero-shot performance of GPT-2 is still far from usable. (suggestive as a research result)
  61. We hold that the poor performance of the pre-trained multimodal model may be attributed to the fact that the pre-training datasets and objects have gaps in information extraction tasks. (we hold that / we hypothesize that)
  62. To steer our models towards appropriate behaviour at a more fine-grained level, we rely heavily on our models themselves as tools. (to steer our models towards ...)